ISPE Inc. International Conference on Transdisciplinary Engineering

Deep Learning-Based Method for Vision-Guided Robotic Grasping of Unknown Objects


Abstract

Collaborative robots must operate safely and efficiently in ever-changing unstructured environments, grasping and manipulating many different objects. Artificial vision has proved to be the ideal sensing technology for collaborative robots, and it is widely used for identifying the objects to manipulate and for detecting their optimal grasps. One of the main drawbacks of state-of-the-art robotic vision systems is the long training needed to teach the identification and optimal grasps of each object, which strongly reduces the robot's productivity and overall operating flexibility. To overcome this limitation, we propose an engineering method, based on deep learning techniques, for detecting robotic grasps of unknown objects in an unstructured environment, which should enable collaborative robots to autonomously generate grasping strategies without the need for training and programming. A novel loss function for training the grasp prediction network has been developed and shown to work well even with low-resolution 2-D images, thus allowing the use of a single, smaller, low-cost camera that can be better integrated into robotic end-effectors. Despite the reduced information available (resolution and depth), an accuracy of 75% has been achieved on the Cornell dataset, and it is shown that our implementation of the loss function does not suffer from the common problems reported in the literature. The system has been implemented using the ROS framework and tested on a Baxter collaborative robot.
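The abstract does not give the form of the proposed loss function, only that it regresses grasps from low-resolution 2-D images. As a purely illustrative sketch, a common baseline in grasp-detection work (e.g. on the Cornell dataset) is to regress a 5-parameter grasp rectangle (x, y, theta, w, h) with a smooth-L1 loss, wrapping the angle term to respect the pi-symmetry of a parallel-jaw grasp. The function below is such a generic stand-in, not the paper's actual loss:

```python
import math

def grasp_rect_loss(pred, target):
    """Toy regression loss over a 5-parameter grasp rectangle
    (x, y, theta, w, h). This is a generic smooth-L1 stand-in for
    illustration only; the paper's novel loss is not specified in
    the abstract. The angle error is wrapped because a grasp
    rectangle rotated by pi is the same parallel-jaw grasp."""
    def smooth_l1(d):
        d = abs(d)
        return 0.5 * d * d if d < 1.0 else d - 0.5

    x_p, y_p, t_p, w_p, h_p = pred
    x_t, y_t, t_t, w_t, h_t = target

    # Wrap the angle difference into [-pi/2, pi/2).
    dt = (t_p - t_t + math.pi / 2) % math.pi - math.pi / 2

    return (smooth_l1(x_p - x_t) + smooth_l1(y_p - y_t)
            + smooth_l1(dt)
            + smooth_l1(w_p - w_t) + smooth_l1(h_p - h_t))
```

With this symmetry handling, a prediction whose angle differs from the target by exactly pi incurs zero angular penalty, which is one of the pitfalls naive angle regression runs into.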
