Towards a platform-independent cooperative human-robot interaction system: II. Perception, execution and imitation of goal directed actions


Abstract

If robots are to cooperate with humans in an increasingly human-like manner, then significant progress must be made in their abilities to observe and learn to perform novel goal-directed actions in a flexible and adaptive manner. The current research addresses this challenge. In CHRIS.I [1], we developed a platform-independent perceptual system that learns from observation to recognize human actions in a way that abstracts from the specifics of the robotic platform, learning actions including "put X on Y" and "take X". In the current research, we extend this system from action perception to execution, consistent with current developmental research in human understanding of goal-directed action and teleological reasoning. We demonstrate the platform independence with experiments on three different robots. In Experiments 1 and 2 we complete our previous study of perception of the actions "put" and "take", demonstrating how the system learns to execute these same actions, along with the new related actions "cover" and "uncover", based on the composition of the action primitives "grasp X" and "release X at Y". Significantly, these compositional action execution specifications learned on one iCub robot are then executed on another, based on the abstraction layer of motor primitives. Experiment 3 further validates the platform independence of the system, as a new action that is learned on the iCub in Lyon is then executed on the Jido robot in Toulouse. In Experiment 4 we extend the definition of action perception to include the notion of agency, again inspired by developmental studies of agency attribution, exploiting the Kinect motion capture system for tracking human motion. Finally, in Experiment 5 we demonstrate how the combined representation of action in terms of perception and execution provides the basis for imitation.
This provides the basis for an open-ended cooperation capability in which new actions can be learned and integrated into shared plans for cooperation. Part of the novelty of this research is the robots' use of spoken language understanding and visual perception to generate action representations in a platform-independent manner based on physical state changes. This provides a flexible capability for goal-directed action imitation.
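The compositional scheme described in the abstract, where higher-level actions such as "put X on Y" are built from the primitives "grasp X" and "release X at Y", can be illustrated with a minimal sketch. This is not the paper's actual implementation or API; all function and state names here are hypothetical, and actions are modeled only as transformations of a symbolic world state (physical state changes), in the spirit of the platform-independent representation described above.

```python
# Hypothetical sketch of compositional action execution: each primitive
# maps a symbolic world state to a new state, and composite actions are
# sequences of primitives. Names are illustrative, not from the paper.

def grasp(state, x):
    """Primitive 'grasp X': object x is now held, resting nowhere."""
    state = dict(state)          # copy: actions are pure state transforms
    state[x] = "in-hand"
    return state

def release(state, x, y):
    """Primitive 'release X at Y': object x now rests on y."""
    state = dict(state)
    state[x] = "on " + y
    return state

def put(state, x, y):
    """Composite 'put X on Y' = grasp X, then release X at Y."""
    return release(grasp(state, x), x, y)

def cover(state, x, y):
    """Composite 'cover Y with X': the same primitive sequence as 'put',
    distinguished by the resulting physical state change (y is occluded)."""
    return put(state, x, y)

state = {"toy": "on table"}
state = put(state, "toy", "box")
print(state["toy"])  # "on box"
```

Because the composite is defined purely over state changes, the same specification can in principle be bound to different motor-primitive implementations on different robot platforms, which is the abstraction-layer idea the abstract describes.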
