IEEE/RSJ International Conference on Intelligent Robots and Systems

What's in the container? Classifying object contents from vision and touch


Abstract

Robots operating in household environments need to interact with food containers of different types. Whether a container is filled with milk, juice, yogurt or coffee may affect the way robots grasp and manipulate it. In this paper, we concentrate on the problem of identifying what kind of content is in a container based on tactile and/or visual feedback in combination with grasping. In particular, we investigate the benefits of using unimodal (visual or tactile) or bimodal (visual-tactile) sensory data for this purpose. We direct our study toward cardboard containers that are empty or hold liquid or solid content. The motivation for using grasping rather than shaking is that we want to investigate the content prior to applying manipulation actions to a container. Our results show that we achieve comparable classification rates with unimodal data and that the visual and tactile data are complementary.
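The unimodal-versus-bimodal comparison described in the abstract can be sketched as a simple feature-fusion experiment. The sketch below is illustrative only: the feature dimensions, synthetic data, and nearest-centroid classifier are assumptions for demonstration, not the paper's actual sensors, features, or method.

```python
import numpy as np

# Hypothetical setup: 3 content classes (empty, liquid, solid),
# with a visual feature vector and a tactile feature vector per grasp.
N_PER_CLASS, D_VIS, D_TAC, N_CLASSES = 30, 8, 6, 3

def make_split(seed):
    """Generate synthetic visual/tactile features for each class."""
    rng = np.random.default_rng(seed)
    X_vis, X_tac, y = [], [], []
    for label in range(N_CLASSES):
        # Each class is a Gaussian blob whose mean shifts with the label.
        X_vis.append(rng.normal(loc=label, scale=1.0, size=(N_PER_CLASS, D_VIS)))
        X_tac.append(rng.normal(loc=label, scale=1.0, size=(N_PER_CLASS, D_TAC)))
        y.append(np.full(N_PER_CLASS, label))
    return np.vstack(X_vis), np.vstack(X_tac), np.concatenate(y)

def nearest_centroid_accuracy(X_train, y_train, X_test, y_test):
    """Classify each test sample by its nearest class centroid."""
    centroids = np.stack([X_train[y_train == c].mean(axis=0)
                          for c in range(N_CLASSES)])
    dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
    return float((dists.argmin(axis=1) == y_test).mean())

Xv_tr, Xt_tr, y_tr = make_split(1)
Xv_te, Xt_te, y_te = make_split(2)

acc_visual = nearest_centroid_accuracy(Xv_tr, y_tr, Xv_te, y_te)
acc_tactile = nearest_centroid_accuracy(Xt_tr, y_tr, Xt_te, y_te)
# Bimodal: concatenate the visual and tactile feature vectors
# before classification (simple early fusion).
acc_bimodal = nearest_centroid_accuracy(
    np.hstack([Xv_tr, Xt_tr]), y_tr,
    np.hstack([Xv_te, Xt_te]), y_te)

print(acc_visual, acc_tactile, acc_bimodal)
```

Comparing the three accuracies mirrors the paper's question: whether either modality alone suffices, and whether fusing them helps.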
