IEEE International Conference on Robotics and Biomimetics

Learning complex assembly skills from kinect based human robot interaction



Abstract

Acquiring complex assembly skills remains a challenging task in robot programming. Because of differences in sensing and body structure, human knowledge must be demonstrated, recorded, converted, and finally learned by the robot in an implicit and indirect way. In this process, "how to demonstrate", "how to convert", and "how to learn" are the key problems. In this paper, a Kinect sensor is used to capture the behavior of the human demonstrator. Through natural human-robot interaction, the body skeleton and 3D joint coordinates are provided in real time, which fully describe the human intention and the task-related skills. To overcome structural and individual differences, a Cartesian-level unified mapping method is proposed to convert the human motion and match it to the specified robot. The recorded data sets are modeled using a Gaussian mixture model (GMM) and Gaussian mixture regression (GMR), which extract redundancies across multiple demonstrations and build robust models that regenerate the dynamics of the recorded movements. The proposed methodology is implemented on the imNEU humanoid robot platform, and experimental results verify its effectiveness.
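The abstract proposes a Cartesian-level unified mapping to bridge the kinematic differences between demonstrator and robot, but does not detail it. As one plausible reading, the sketch below uses a common workspace-scaling approach: express the demonstrator's wrist relative to the shoulder, normalize by human arm length, and rescale to the robot's reach. All function names and numbers here are illustrative assumptions, not the paper's method.

```python
# Hypothetical Cartesian-level retargeting sketch (workspace scaling), not
# the paper's own mapping: normalize the Kinect wrist position by human arm
# length, then rescale into the robot's reachable workspace.
import numpy as np

def map_to_robot(wrist, shoulder, human_arm_len, robot_arm_len, robot_shoulder):
    """Map a Kinect wrist position (metres, camera frame) to the robot frame."""
    # Shoulder-relative direction, scaled to the unit workspace of the human.
    rel = (np.asarray(wrist) - np.asarray(shoulder)) / human_arm_len
    # Re-expand to the robot's arm length around the robot's shoulder.
    return np.asarray(robot_shoulder) + robot_arm_len * rel

# Example: a point from a 0.6 m human reach mapped onto a 0.4 m robot arm.
p = map_to_robot(wrist=[0.3, 0.1, 0.5], shoulder=[0.0, 0.0, 0.2],
                 human_arm_len=0.6, robot_arm_len=0.4,
                 robot_shoulder=[0.0, 0.0, 0.3])
```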
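GMM/GMR learning from demonstration, as named in the abstract, follows a standard recipe: pool time-indexed samples from several demonstrations, fit a joint Gaussian mixture over [time, position], then regress position on time by blending each component's conditional mean with its responsibility. A minimal sketch with scikit-learn on a 1-D toy trajectory; fit_gmm and gmr are illustrative names, and the toy data stands in for the recorded Kinect trajectories.

```python
# Minimal GMM/GMR sketch: fit a mixture over [t, pos] samples pooled from
# several demonstrations, then regress pos on t (Gaussian mixture regression).
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def fit_gmm(demos, n_components=5, seed=0):
    """demos: (N, 1 + D) array of [t, pos...] samples pooled from all demos."""
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full", random_state=seed)
    gmm.fit(demos)
    return gmm

def gmr(gmm, t_query):
    """Gaussian mixture regression: E[pos | t] for each query time."""
    D = gmm.means_.shape[1] - 1              # output dimensionality
    out = np.zeros((len(t_query), D))
    for i, t in enumerate(t_query):
        # Component responsibilities from the marginal density over time.
        h = np.array([w * norm.pdf(t, m[0], np.sqrt(c[0, 0]))
                      for w, m, c in zip(gmm.weights_, gmm.means_,
                                         gmm.covariances_)])
        h /= h.sum()
        # Blend each component's conditional mean given t.
        for k, (m, c) in enumerate(zip(gmm.means_, gmm.covariances_)):
            cond_mean = m[1:] + c[1:, 0] / c[0, 0] * (t - m[0])
            out[i] += h[k] * cond_mean
    return out

# Usage: pool five noisy, time-aligned demos and regenerate a smooth trajectory.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
demos = np.vstack([np.column_stack([t, np.sin(2 * np.pi * t)
                                    + 0.05 * rng.standard_normal(100)])
                   for _ in range(5)])
model = fit_gmm(demos, n_components=4)
traj = gmr(model, t)   # (100, 1) regenerated trajectory
```

Averaging component conditionals weighted by responsibility is what lets GMR exploit redundancy across demonstrations: consistent segments get tight Gaussians and are reproduced faithfully, while variable segments are smoothed.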
