Journal: Advanced Engineering Informatics

Deep learning-based method for vision-guided robotic grasping of unknown objects



Abstract

Nowadays, robots are heavily used in factories for a variety of tasks, most of which involve grasping and manipulating generic objects in unstructured scenarios. To better mimic a human operator performing a grasping action, who must identify the object and determine an optimal grasp from visual information, a widely adopted sensing solution is artificial vision. Nonetheless, state-of-the-art applications require long training and fine-tuning to manually build the object model used at run time during normal operation, which reduces the overall operational throughput of the robotic system. To overcome these limits, the paper presents a framework based on Deep Convolutional Neural Networks (DCNNs) that predicts both single and multiple grasp poses for multiple objects all at once, using a single RGB image as input. Thanks to a novel loss function, our framework is trained in an end-to-end fashion and matches state-of-the-art accuracy with a substantially smaller architecture, which gives unprecedented real-time performance during experimental tests and makes the application reliable for use on real robots. The system has been implemented using the ROS framework and tested on a Baxter collaborative robot.
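The abstract does not spell out how predicted grasp poses are evaluated against ground truth. As a hedged illustration only (not taken from the paper), work on RGB-based grasp detection commonly parameterizes a grasp as a 5-D rectangle (x, y, θ, w, h) and scores a prediction with the "rectangle metric": the orientation must lie within 30° of a ground-truth grasp and the Jaccard (intersection-over-union) overlap must exceed 0.25. A minimal NumPy sketch of that metric, with all function names illustrative:

```python
import numpy as np

def grasp_corners(x, y, theta, w, h):
    """Corners of a rotated grasp rectangle centered at (x, y), CCW order."""
    c, s = np.cos(theta), np.sin(theta)
    local = np.array([[-w/2, -h/2], [w/2, -h/2], [w/2, h/2], [-w/2, h/2]])
    rot = np.array([[c, -s], [s, c]])
    return local @ rot.T + np.array([x, y])

def rasterize(corners, size=64):
    """Boolean occupancy mask of grid cells inside a convex CCW polygon."""
    yy, xx = np.mgrid[0:size, 0:size]
    pts = np.stack([xx.ravel(), yy.ravel()], axis=1).astype(float)
    inside = np.ones(len(pts), dtype=bool)
    n = len(corners)
    for i in range(n):
        a, b = corners[i], corners[(i + 1) % n]
        # point is inside a CCW convex polygon iff it is left of every edge
        cross = (b[0]-a[0])*(pts[:, 1]-a[1]) - (b[1]-a[1])*(pts[:, 0]-a[0])
        inside &= cross >= 0
    return inside.reshape(size, size)

def rect_match(g_pred, g_true, angle_tol=np.deg2rad(30), iou_thr=0.25, size=64):
    """Rectangle metric: angle within tolerance AND Jaccard index above threshold."""
    # grasp orientation is pi-periodic: a gripper rotated 180 deg is the same grasp
    dtheta = abs((g_pred[2] - g_true[2] + np.pi/2) % np.pi - np.pi/2)
    m_pred = rasterize(grasp_corners(*g_pred), size)
    m_true = rasterize(grasp_corners(*g_true), size)
    iou = (m_pred & m_true).sum() / max((m_pred | m_true).sum(), 1)
    return bool(dtheta <= angle_tol and iou >= iou_thr)
```

The rasterized IoU is a crude stand-in for an exact rotated-rectangle overlap, but it keeps the sketch dependency-free; whether the paper uses this metric, a grasp-pose variant of it, or another evaluation protocol is not stated in the abstract.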
