International Symposium on Experimental Robotics

Learning Hand-Eye Coordination for Robotic Grasping with Large-Scale Data Collection


Abstract

We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pose. This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination. We then use this network to servo the gripper in real time to achieve successful grasps. To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware. Our experimental evaluation demonstrates that our method achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing.
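To make the abstract's core loop concrete, the sketch below illustrates the two pieces it describes: a convolutional network that maps a monocular image and a candidate task-space gripper motion to a grasp-success probability, and a servoing step that repeatedly picks the motion the network scores highest. This is a hedged illustration only, not the authors' released system: the class name GraspSuccessNet, the helper servo_step, the layer sizes, the 5-dimensional motion command, and the cross-entropy-style sampling loop are all illustrative assumptions, written against PyTorch.

import torch
import torch.nn as nn

class GraspSuccessNet(nn.Module):
    """Scores P(successful grasp | monocular image, candidate motion).
    Architecture and sizes are illustrative assumptions, not the paper's."""
    def __init__(self, motion_dim: int = 5):
        super().__init__()
        # Convolutional trunk over the monocular camera image.
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Fuse image features with the task-space motion command and
        # output a success probability.
        self.head = nn.Sequential(
            nn.Linear(64 + motion_dim, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, image, motion):
        feats = self.conv(image).flatten(1)                   # (N, 64)
        return self.head(torch.cat([feats, motion], dim=1))   # (N, 1)

def servo_step(net, image, motion_dim=5, samples=64, iters=3, elite=6):
    """Choose the next gripper motion by sampling candidate motions and
    refitting around the ones the network scores highest (a simple
    cross-entropy-style search; the optimizer choice is an assumption).
    `image` is a single frame of shape (1, 3, H, W)."""
    mean, std = torch.zeros(motion_dim), torch.ones(motion_dim)
    with torch.no_grad():
        for _ in range(iters):
            cand = mean + std * torch.randn(samples, motion_dim)
            scores = net(image.expand(samples, -1, -1, -1), cand).squeeze(1)
            top = cand[scores.topk(elite).indices]
            mean, std = top.mean(dim=0), top.std(dim=0) + 1e-6
    return mean  # commanded task-space motion for this servoing step

# Example usage (shapes only, random weights):
#   net = GraspSuccessNet()
#   img = torch.rand(1, 3, 96, 96)
#   motion = servo_step(net, img)

In the abstract's terms, repeatedly calling a step like servo_step with fresh camera frames is what allows the controller to correct mistakes by continuous servoing, since each new image re-scores the candidate motions.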
