IEEE-RAS International Conference on Humanoid Robots

Multi-model approach based on 3D functional features for tool affordance learning in robotics



Abstract

Tools can afford similar functionality if they share some common geometrical features. Moreover, the effect that can be achieved with a tool depends as much on the action performed as on the way in which it is grasped. In the current paper we present a two-step model for learning and predicting tool affordances which specifically tackles these issues. First, we introduce the Oriented Multi-Scale Extended Gaussian Image (OMS-EGI), a set of 3D features devised to describe tools in interaction scenarios, able to encapsulate in a general and compact way the geometrical properties of a tool relative to the way in which it is grasped. Then, based on these features, we propose an approach to learn and predict tool affordances in which the robot first discovers the available tool-pose categories of a set of hand-held tools, and then learns a distinct affordance model for each of the discovered tool-pose categories. Results show that the combination of OMS-EGI 3D features and the multi-model affordance learning approach produces quite accurate predictions of the effect that an action performed with a tool grasped in a particular way will have, even for unseen tools or grasp configurations.
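To make the two-step pipeline concrete, the sketch below shows one possible reading of it in Python. It is not the authors' implementation: the descriptor is a drastically simplified stand-in for OMS-EGI (a plain histogram of surface normals expressed in the grasp frame, without the multi-scale spatial subdivision the paper describes), and the clusterer (k-means), the per-category regressor (ridge regression), and all names such as `normal_histogram` and `MultiModelAffordance` are assumptions made for illustration only.

```python
# Minimal sketch of the two-step approach described in the abstract.
# NOT the paper's implementation; feature, clusterer and regressor choices are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge


def normal_histogram(normals, grasp_rotation, n_bins=8):
    """Simplified EGI-style descriptor: bin unit surface normals, expressed
    in the grasp-oriented frame, by azimuth and elevation."""
    oriented = normals @ grasp_rotation.T                 # express normals in the grasp frame
    az = np.arctan2(oriented[:, 1], oriented[:, 0])       # azimuth in [-pi, pi]
    el = np.arcsin(np.clip(oriented[:, 2], -1.0, 1.0))    # elevation in [-pi/2, pi/2]
    hist, _, _ = np.histogram2d(
        az, el, bins=n_bins,
        range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]])
    return (hist / max(len(normals), 1)).ravel()          # normalized, flattened descriptor


class MultiModelAffordance:
    """Step 1: discover tool-pose categories by clustering descriptors.
    Step 2: fit one effect-prediction model per discovered category."""

    def __init__(self, n_categories=3):
        self.clusterer = KMeans(n_clusters=n_categories, n_init=10, random_state=0)
        self.models = {}

    def fit(self, descriptors, actions, effects):
        labels = self.clusterer.fit_predict(descriptors)
        for c in np.unique(labels):
            idx = labels == c
            x = np.hstack([descriptors[idx], actions[idx]])
            self.models[c] = Ridge(alpha=1.0).fit(x, effects[idx])
        return self

    def predict(self, descriptor, action):
        c = self.clusterer.predict(descriptor[None, :])[0]       # route to a tool-pose category
        x = np.hstack([descriptor, action])[None, :]
        return self.models[c].predict(x)[0]                      # query that category's model


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: random unit normals per tool-pose, one scalar action parameter
    # (e.g. push angle) and one scalar effect (e.g. object displacement).
    descriptors = []
    for _ in range(60):
        normals = rng.normal(size=(200, 3))
        normals /= np.linalg.norm(normals, axis=1, keepdims=True)
        descriptors.append(normal_histogram(normals, np.eye(3)))
    descriptors = np.array(descriptors)
    actions = rng.uniform(0, np.pi, size=(60, 1))
    effects = 0.1 * actions[:, 0] + rng.normal(scale=0.01, size=60)

    learner = MultiModelAffordance(n_categories=3).fit(descriptors, actions, effects)
    print(learner.predict(descriptors[0], actions[0]))
```

The point of this structure is that prediction first routes a new tool-pose descriptor to its discovered category and only then queries that category's own effect model, which is what would let such an approach generalise to unseen tools or grasp configurations that fall into an already known category.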
