IEEE International Conference on Development and Learning

Learning grasping affordances from local visual descriptors



Abstract

In this paper we study the learning of affordances through self-experimentation. We study the learning of local visual descriptors that anticipate the success of a given action executed upon an object. Consider, for instance, the case of grasping. Although graspability is a property of the whole object, a grasp action will only succeed if applied to the right part of the object. We propose an algorithm to learn local visual descriptors of good grasping points based on a set of trials performed by the robot. The method estimates the probability of a successful action (grasp) based on simple local features. Experimental results on a humanoid robot illustrate how our method is able to learn descriptors of good grasping points and to generalize to novel objects based on prior experience.
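The abstract describes learning a mapping from simple local visual features at candidate grasp points to the probability of grasp success, fitted from the robot's own trials. The sketch below only illustrates that general idea and is not the authors' algorithm: the patch-based descriptor and the logistic-regression model of P(success | descriptor) are assumptions made here for illustration.

```python
# Illustrative sketch (not the paper's method): fit P(grasp success | local descriptor)
# from past trials, then score candidate grasp points on new objects.
import numpy as np

def extract_local_descriptor(image, point, patch_size=9):
    """Crop a small patch around a candidate grasp point and flatten it.

    `image` is a 2-D grayscale array, `point` is (row, col). A real system
    would use richer local features; raw intensities are used here for brevity.
    """
    r, c = point
    h = patch_size // 2
    patch = image[r - h:r + h + 1, c - h:c + h + 1]
    return patch.astype(float).ravel() / 255.0

def fit_grasp_model(descriptors, outcomes, lr=0.1, epochs=500):
    """Fit logistic regression by gradient ascent on the log-likelihood.

    `descriptors`: (N, D) local features from past grasp trials.
    `outcomes`: (N,) array with 1 for a successful grasp, 0 for a failure.
    """
    X = np.hstack([descriptors, np.ones((len(descriptors), 1))])  # append bias
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted success probability
        w += lr * X.T @ (outcomes - p) / len(X)   # gradient of the log-likelihood
    return w

def grasp_success_probability(w, descriptor):
    """Score a candidate grasp point on a (possibly novel) object."""
    x = np.append(descriptor, 1.0)
    return 1.0 / (1.0 + np.exp(-x @ w))

if __name__ == "__main__":
    # Toy usage with random data standing in for real robot trials.
    rng = np.random.default_rng(0)
    feats = rng.random((200, 81))                      # 200 trials, 9x9 patches
    labels = (feats.mean(axis=1) > 0.5).astype(float)  # synthetic outcomes
    w = fit_grasp_model(feats, labels)
    print(grasp_success_probability(w, feats[0]))
```

In this toy setup the classifier is trained once on logged trial outcomes; generalization to novel objects comes from the descriptor being local rather than object-specific.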
