IEEE International Conference on Robotics and Automation
Towards a unifying grasp representation for imitation learning on humanoid robots

Abstract

In this paper, we present a grasp representation in task space that exploits position information of the fingertips, providing a suitable basis for grasp imitation learning. Inspired by neuroscientific findings, finger movement synergies in task space, together with fingertip positions, are used to derive a parametric low-dimensional grasp representation. To account for correlated finger movements, we describe grasps using a system of virtual springs connecting the fingers, where different grasp types are defined by parameterizing the spring constants. Based on this continuous parameterization, all instantiations of grasp types and all hand preshapes during a grasping action (reach, preshape, enclose, open) can be represented. We present experimental results in which the spring constants are estimated solely from fingertip motion tracking using the stereo camera setup of a humanoid robot. The results show that grasps generated from the proposed representation are similar to the observed grasps.
