To be effective, learning of robotic motion from demonstration should not be limited to the direct repetition of movements; it should also enable modification of the motion with respect to the state of the external environment and the generation of actions for previously unencountered situations. In this paper we propose an approach that combines these two capabilities within the framework of dynamic movement primitives (DMPs). The approach adapts motion through coupling terms introduced into the DMPs at the velocity level. The coupling term is learned in a few repetitions of the motion using iterative learning control (ILC). The adaptation is driven by force feedback, which originates either from autonomous contact with the environment or from human intervention, and can satisfy a given constraint, e.g., a desired contact force or a prescribed position. The major novelty of this paper is the extension of this scheme with statistical generalization between the learned coupling terms, which allows online adaptation of the motion to previously unexplored situations. The benefit of the approach is reduced human demonstration effort: a single demonstration can be autonomously adapted to different situations with ILC, and recording the learned coupling terms builds up a database for generalization. As a side effect, since learning takes a few iterations, the coupling terms of the intermediate learning attempts can also be stored in the database, allowing for different generalization queries and outcomes. In the paper we provide the details of the approach, followed by simulated and real-world evaluations.
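The core mechanism described above, a coupling term entering a DMP at the velocity level and refined over a few repetitions by ILC, can be illustrated with a minimal numerical sketch. This is not the authors' implementation: the one-DOF DMP below omits the phase-driven forcing term, the "force feedback" is stood in for by a position tracking error, and all gains, the ILC update rule, and the environmental offset are assumed for the demonstration.

```python
import numpy as np


def run_dmp(c, y0=0.0, g=1.0, tau=1.0, dt=0.002, alpha_z=48.0, beta_z=12.0):
    """Integrate a one-DOF DMP transformation system once and return the trajectory.

    The forcing term and phase system are omitted for clarity; the coupling
    term c(t) is added at the velocity level, as in the approach above.
    """
    n = len(c)
    y, z = y0, 0.0
    traj = np.zeros(n)
    for k in range(n):
        z_dot = alpha_z * (beta_z * (g - y) - z) / tau
        y_dot = (z + c[k]) / tau          # coupling term enters the velocity
        z += z_dot * dt
        y += y_dot * dt
        traj[k] = y
    return traj


def ilc_adapt(desired, iterations=15, gain=6.0):
    """Learn the coupling term c(t) over a few repetitions of the motion.

    A simple P-type ILC update is used: after each repetition, the coupling
    term is corrected in proportion to the tracking error (a stand-in for
    measured force feedback).
    """
    n = len(desired)
    c = np.zeros(n)
    errors = []
    for _ in range(iterations):
        traj = run_dmp(c)
        e = desired - traj                # error observed in this repetition
        errors.append(float(np.max(np.abs(e))))
        c = c + gain * e                  # ILC update of the coupling term
    return c, errors


# Demo: the environment requires the demonstrated path to be offset by up to
# 0.1 (a smooth ramp, so the error is zero at the start of the motion).
n = 1000                                  # 2 s horizon at dt = 0.002
desired = run_dmp(np.zeros(n)) + 0.1 * np.linspace(0.0, 1.0, n) ** 2
c, errors = ilc_adapt(desired)
```

After the repetitions, the learned array `c` is exactly the kind of coupling term that the paper proposes to store in a database, one entry per situation, as the basis for statistical generalization to unexplored situations; the intermediate `c` from each ILC iteration could be stored as well, as noted above.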