Robot assistants sharing an environment with humans have to interact with them and learn, or at least adapt to, individual human needs. One of the core abilities is learning from human demonstrations, where the robot is supposed to observe the execution of a task, acquire task knowledge, and reproduce it. In this paper, a system to interpret and reason over demonstrations of household tasks is presented. The focus is on the model-based representation of manipulation tasks, which serves as a basis for reasoning over the acquired task knowledge. The aim of the reasoning is to condense and interconnect the knowledge. A measure for assessing the information content of task features is introduced that relies both on general background knowledge and on task-specific knowledge gathered from the user demonstrations. Besides the autonomous estimation of feature information content, speech comments given during execution that point out the relevance of features are considered as well.
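The abstract does not specify how the two knowledge sources are combined, so the following is only a hypothetical sketch of such a measure: a feature whose value stays consistent across demonstrations (low entropy) is scored as informative, and that task-specific score is blended with a general background prior. The function name, the `prior_relevance` parameter, and the blending weight `alpha` are illustrative assumptions, not the paper's method.

```python
import math
from collections import Counter

def feature_information(observed_values, prior_relevance=0.5, alpha=0.5):
    """Score a task feature's information content in [0, 1].

    Hypothetical sketch: a feature whose value is consistent across
    demonstrations (low entropy) is treated as highly informative;
    the task-specific score is blended with a background prior.
    """
    counts = Counter(observed_values)
    n = len(observed_values)
    # Shannon entropy of the observed value distribution
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    # Normalize by the maximum possible entropy for this many distinct values
    max_entropy = math.log2(len(counts)) if len(counts) > 1 else 1.0
    consistency = 1.0 - entropy / max_entropy  # 1.0 = perfectly consistent
    # Blend task-specific consistency with the general background prior
    return alpha * consistency + (1 - alpha) * prior_relevance

# A grasp-type feature identical in all five demonstrations scores high
print(feature_information(["power_grasp"] * 5, prior_relevance=0.3))  # 0.65
```

A speech comment highlighting a feature ("watch the cup's orientation") could then simply raise that feature's prior before blending.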