IEEE International Conference on Autonomous Robot Systems and Competitions

“iCub, clean the table!” A robot learning from demonstration approach using deep neural networks



Abstract

Autonomous service robots have become a key research topic in robotics, particularly for household chores. A typical home scenario is highly unconstrained, and a service robot needs to adapt constantly to new situations. In this paper, we address the problem of autonomous cleaning tasks in uncontrolled environments. In our approach, a human instructor uses kinesthetic demonstrations to teach a robot how to perform different cleaning tasks on a table. We then use Task-Parametrized Gaussian Mixture Models (TP-GMMs) to encode the variability of the demonstrations while providing appropriate generalization abilities. TP-GMMs extend Gaussian Mixture Models with an auxiliary set of reference frames, in order to extrapolate the demonstrations to different task parameters such as movement location, amplitude, or orientation. However, the reference frames (which parametrize TP-GMMs) can be very difficult to extract in practice, as doing so may require segmenting cluttered images of the working table-top. Instead, in this work the reference frames are automatically extracted from robot camera images, using a deep neural network trained during human demonstrations of a cleaning task. This approach has two main benefits: (i) it takes the human completely out of the loop while performing complex cleaning tasks; and (ii) the network identifies the specific task to be performed directly from image data, thus also enabling automatic task selection from a set of previously demonstrated tasks. The system was implemented on the iCub humanoid robot. During the tests, the robot successfully cleaned a table with two different types of dirt (wiping away a marker's scribbles or sweeping up clusters of lentils).
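To make the TP-GMM machinery concrete, below is a minimal numpy sketch of the standard TP-GMM reproduction step (our illustration, not the authors' code; the function and variable names are hypothetical). Each reference frame j carries its own local Gaussian components; the task parameters (A_j, b_j) observed at reproduction time map those components into world coordinates, and the per-frame Gaussians are fused by a product of Gaussians. In this paper, the (A_j, b_j) would come from the deep network's frame predictions rather than manual segmentation.

    import numpy as np

    def tp_gmm_global_components(mus_local, sigmas_local, frames):
        """Fuse per-frame local GMM components into global components.

        mus_local:    array (P, K, D)    -- component means, one set per frame
        sigmas_local: array (P, K, D, D) -- component covariances per frame
        frames:       list of P (A, b) task parameters: the rotation A and
                      offset b of each reference frame at reproduction time
        Returns the global means (K, D) and covariances (K, D, D).
        """
        P, K, D = mus_local.shape
        mus_g = np.zeros((K, D))
        sigmas_g = np.zeros((K, D, D))
        for k in range(K):
            precision = np.zeros((D, D))
            weighted_mu = np.zeros(D)
            for j, (A, b) in enumerate(frames):
                # Map the j-th frame's k-th component into world coordinates.
                mu_hat = A @ mus_local[j, k] + b
                sigma_hat = A @ sigmas_local[j, k] @ A.T
                inv = np.linalg.inv(sigma_hat)
                # Product of Gaussians: accumulate precisions and
                # precision-weighted means across all frames.
                precision += inv
                weighted_mu += inv @ mu_hat
            sigmas_g[k] = np.linalg.inv(precision)
            mus_g[k] = sigmas_g[k] @ weighted_mu
        return mus_g, sigmas_g

At reproduction time, the fused components (together with the mixture weights learned from the demonstrations) would typically drive Gaussian mixture regression to generate the cleaning motion for the new table-top configuration.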
