2011 IEEE International Conference on Robotics and Automation

Towards a unifying grasp representation for imitation learning on humanoid robots


Abstract

In this paper, we present a grasp representation in task space that exploits position information of the fingertips. We propose a new way to represent grasps in the task space, which provides a suitable basis for grasp imitation learning. Inspired by neuroscientific findings, finger movement synergies in the task space, together with fingertip positions, are used to derive a parametric low-dimensional grasp representation. Taking correlated finger movements into account, we describe grasps using a system of virtual springs connecting the fingers, where different grasp types are defined by parameterizing the spring constants. Based on such a continuous parameterization, all instantiations of grasp types and all hand preshapes during a grasping action (reach, preshape, enclose, open) can be represented. We present experimental results in which the spring constants are estimated solely from fingertip motion tracking using the stereo camera setup of a humanoid robot. The results show that the grasps generated from the proposed representation are similar to the observed grasps.
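The core idea of the abstract can be illustrated with a minimal sketch, not taken from the paper itself: fingertips are connected pairwise by zero-rest-length virtual springs, and the matrix of spring constants parameterizes the grasp type. The function names (`spring_forces`, `step`) and the overdamped first-order dynamics are illustrative assumptions, not the authors' implementation.

```python
def spring_forces(positions, k):
    """Force on each fingertip from zero-rest-length virtual springs.

    positions: list of 3-D fingertip positions [x, y, z].
    k: symmetric matrix of spring constants; k[i][j] couples fingers i and j.
    """
    n = len(positions)
    forces = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            for d in range(3):
                # Hooke's law with zero rest length: pull finger i toward finger j.
                forces[i][d] += k[i][j] * (positions[j][d] - positions[i][d])
    return forces


def step(positions, k, dt=0.01):
    """One explicit-Euler step of overdamped dynamics toward the grasp shape."""
    f = spring_forces(positions, k)
    return [[p[d] + dt * f[i][d] for d in range(3)]
            for i, p in enumerate(positions)]


# Example: two fingertips (e.g. thumb and index) coupled by a stiff spring,
# mimicking a precision pinch -- the fingertips converge toward each other.
fingertips = [[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]]
stiffness = [[0.0, 5.0], [5.0, 0.0]]
for _ in range(50):
    fingertips = step(fingertips, stiffness)
```

Under this toy model, changing only the entries of `stiffness` yields different closing behaviors, which mirrors the abstract's claim that grasp types are defined purely by parameterizing the spring constants.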

