International Conference on Development and Learning and Epigenetic Robotics

Reaching development through visuo-proprioceptive-tactile integration on a humanoid robot - a deep learning approach



Abstract

The development of reaching in infants has been studied for nearly nine decades. Originally, early reaching was thought to be visually guided, but more recent evidence suggests "visually elicited" reaching: the infant gazes at the object rather than at its hand during the reaching movement. The importance of haptic feedback has also been emphasized. Inspired by these findings, in this work we use the simulated iCub humanoid robot to construct a model of reaching development. The robot is presented with different objects, gazes at them, and performs motor babbling with one of its arms. Successful contacts with the object are detected through tactile sensors on the hand and forearm. These events constitute the training set: images from the robot's two eyes, head joint angles, tactile activations, and arm joint angles. A deep neural network is trained with images and head joints as inputs, and arm configuration and touch as outputs. After learning, the network can infer arm configurations that would result in a successful reach, together with a prediction of the tactile activation (i.e., which body part would make contact). Our main contribution is twofold: (i) our pipeline is end-to-end from stereo images and head joints (6 DoF) to arm-torso configurations (10 DoF) and tactile activations, without any preprocessing, explicit coordinate transformations, etc.; (ii) uniquely, this approach supports reaches with multiple effectors corresponding to different regions of the sensitive skin.
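The mapping the abstract describes can be pictured as a single network with two output heads: a regression head for the arm-torso joint configuration and a classification head predicting which skin region makes contact. The following is a minimal toy sketch of that interface in plain numpy, not the authors' model: the image-feature size, hidden width, and the split into two skin regions (hand vs. forearm) are assumptions for illustration; only the 6-DoF head input and 10-DoF arm-torso output come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions: HEAD_DOF and ARM_TORSO_DOF are from the abstract;
# the rest are hypothetical placeholders for this sketch.
IMG_FEATS = 128        # encoded stereo-image features (assumed)
HEAD_DOF = 6           # head joints (6 DoF, per the abstract)
ARM_TORSO_DOF = 10     # arm-torso configuration (10 DoF, per the abstract)
N_SKIN_REGIONS = 2     # e.g. hand vs. forearm contact (assumed)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ReachNet:
    """Toy two-headed network: (image features, head joints) ->
    (arm-torso joint configuration, per-region contact probability)."""

    def __init__(self, hidden=64):
        d_in = IMG_FEATS + HEAD_DOF
        self.W1 = rng.normal(0.0, 0.1, (d_in, hidden))
        self.b1 = np.zeros(hidden)
        self.W_arm = rng.normal(0.0, 0.1, (hidden, ARM_TORSO_DOF))
        self.b_arm = np.zeros(ARM_TORSO_DOF)
        self.W_touch = rng.normal(0.0, 0.1, (hidden, N_SKIN_REGIONS))
        self.b_touch = np.zeros(N_SKIN_REGIONS)

    def forward(self, img_feats, head_joints):
        h = relu(np.concatenate([img_feats, head_joints]) @ self.W1 + self.b1)
        arm = h @ self.W_arm + self.b_arm                 # joint angles (regression head)
        touch = sigmoid(h @ self.W_touch + self.b_touch)  # contact probabilities per region
        return arm, touch

net = ReachNet()
arm, touch = net.forward(rng.normal(size=IMG_FEATS), rng.normal(size=HEAD_DOF))
```

A query at test time would encode the robot's current view and head pose, then read off a candidate arm-torso configuration plus the predicted contact region; in the paper this is learned end-to-end from the motor-babbling contact events rather than with random weights as here.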
