Robotics and Automation, 2003. Proceedings. ICRA '03. IEEE International Conference

Visual transformations in gesture imitation: what you see is what you do

Abstract

We propose an approach for a robot to imitate the gestures of a human demonstrator. Our framework consists solely of two components: a Sensory-Motor Map (SMM) and a View-Point Transformation (VPT). The SMM establishes an association between an arm image and the corresponding joint angles and it is learned by the system during a period of observation of its own gestures. The VPT is widely discussed in the psychology of visual perception and is used to transform the image of the demonstrator's arm to the so-called ego-centric image, as if the robot were observing its own arm. Different structures of the SMM and VPT are proposed in accordance with observations in human imitation. The whole system relies on monocular visual information and leads to a parsimonious architecture for learning by imitation. Real-time results are presented and discussed.
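The two-component pipeline described in the abstract can be illustrated with a minimal sketch. All names, dimensions, and the specific forms chosen here are assumptions for illustration, not the paper's implementation: the VPT is reduced to a horizontal mirror of image coordinates (one simple way to obtain an ego-centric view), and the SMM is reduced to a least-squares linear map from arm-image features to joint angles, fitted during a self-observation phase.

```python
import numpy as np

IMG_WIDTH = 320  # assumed image width in pixels (hypothetical)

def view_point_transform(demo_points):
    """Hypothetical VPT: map the demonstrator's arm points, as seen by the
    robot's camera, into an ego-centric image by mirroring x-coordinates,
    as if the robot were observing its own arm."""
    ego = demo_points.copy()
    ego[:, 0] = IMG_WIDTH - ego[:, 0]
    return ego

class SensoryMotorMap:
    """Hypothetical SMM: a linear association between arm-image features
    and joint angles, learned while the robot observes its own gestures."""
    def fit(self, features, angles):
        # Least-squares fit: angles ~= features @ A
        self.A, *_ = np.linalg.lstsq(features, angles, rcond=None)
        return self

    def predict(self, features):
        return features @ self.A

# Self-observation phase: synthetic (feature, joint-angle) pairs
rng = np.random.default_rng(0)
F = rng.normal(size=(50, 4))       # arm-image features of the robot's own arm
A_true = rng.normal(size=(4, 2))   # ground-truth map (for the synthetic data)
Q = F @ A_true                     # corresponding joint angles

smm = SensoryMotorMap().fit(F, Q)

# Imitation phase: VPT first (ego-centric view), then SMM gives joint angles
demo = np.array([[100.0, 60.0], [140.0, 90.0]])
ego = view_point_transform(demo)
```

The point of the sketch is the order of operations in the framework: the demonstrator's image passes through the VPT before the SMM is applied, so the SMM only ever needs to be trained on the robot's own (ego-centric) view.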
