International Workshop on Human Friendly Robotics

Multi-modal Intention Prediction with Probabilistic Movement Primitives


Abstract

This paper proposes a method for multi-modal intention prediction based on a probabilistic description of movement primitives and goals. We target dyadic interaction between a human and a robot in a collaborative scenario. The robot acquires multi-modal models of collaborative action primitives that combine gaze cues from the human partner with kinetic information about the manipulation primitives of its arm. We show that when the partner guides the robot with gaze cues, the robot recognizes the intended action primitive even when the candidate actions are ambiguous. Furthermore, this prior knowledge acquired through gaze substantially improves the prediction of the intended future trajectory during physical interaction. Results with the humanoid robot iCub are presented and discussed.
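The abstract describes two probabilistic operations: conditioning a movement-primitive (ProMP) distribution on partial observations to predict the rest of a trajectory, and fusing a gaze-derived prior over primitives with trajectory evidence to recognize the intended action. The paper's actual model and parameters are not reproduced here; the following is a minimal illustrative sketch of those two operations, with all function names, basis choices, and noise values being our own assumptions rather than the authors' implementation.

```python
import numpy as np

def rbf_basis(ts, n_basis=8, width=0.02):
    """Normalized radial basis functions over phase values ts in [0, 1]."""
    centers = np.linspace(0.0, 1.0, n_basis)
    phi = np.exp(-((ts[:, None] - centers[None, :]) ** 2) / (2.0 * width))
    return phi / phi.sum(axis=1, keepdims=True)

def condition_promp(mu_w, Sigma_w, Psi_obs, y_obs, sigma_y=1e-4):
    """Condition the ProMP weight distribution N(mu_w, Sigma_w) on observed
    positions y_obs seen through basis rows Psi_obs (Gaussian conditioning)."""
    S = Psi_obs @ Sigma_w @ Psi_obs.T + sigma_y * np.eye(len(y_obs))
    K = Sigma_w @ Psi_obs.T @ np.linalg.inv(S)
    mu_new = mu_w + K @ (y_obs - Psi_obs @ mu_w)
    Sigma_new = Sigma_w - K @ Psi_obs @ Sigma_w
    return mu_new, Sigma_new

def log_evidence(mu_w, Sigma_w, Psi_obs, y_obs, sigma_y=1e-4):
    """Marginal log-likelihood of a partial trajectory under one primitive."""
    mean = Psi_obs @ mu_w
    cov = Psi_obs @ Sigma_w @ Psi_obs.T + sigma_y * np.eye(len(y_obs))
    diff = y_obs - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff)
                   + logdet + len(y_obs) * np.log(2.0 * np.pi))

def primitive_posterior(gaze_prior, primitives, Psi_obs, y_obs):
    """Fuse a gaze prior over primitives with trajectory evidence:
    p(k | gaze, y) is proportional to p(k | gaze) * p(y | k)."""
    logp = np.log(gaze_prior) + np.array(
        [log_evidence(mu, Sig, Psi_obs, y_obs) for mu, Sig in primitives])
    logp -= logp.max()  # stabilize before exponentiation
    p = np.exp(logp)
    return p / p.sum()
```

Given a gaze-derived prior and a few early trajectory points, `primitive_posterior` identifies the intended primitive, and `condition_promp` then refines the predicted future trajectory of the winning primitive, mirroring how gaze can disambiguate otherwise similar actions.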
