Sensors (Basel, Switzerland)

Affordance-Based Grasping Point Detection Using Graph Convolutional Networks for Industrial Bin-Picking Applications



Abstract

Grasping point detection has traditionally been a core robotics and computer vision problem. In recent years, deep learning-based methods have been widely used to predict grasping points and have shown strong generalization capabilities under uncertainty. In particular, approaches that predict object affordances without relying on object identity have obtained promising results in random bin-picking applications. However, most of them rely on RGB/RGB-D images, and it is not clear to what extent 3D spatial information is exploited. Graph Convolutional Networks (GCNs) have been successfully used for object classification and scene segmentation in point clouds, and also to predict grasping points in simple laboratory experiments. In the present proposal, we adapted the Deep Graph Convolutional Network model with the intuition that learning from n-dimensional point clouds would boost performance in predicting object affordances. To the best of our knowledge, this is the first time that GCNs have been applied to predict affordances for suction and gripper end effectors in an industrial bin-picking environment. Additionally, we designed a bin-picking-oriented data preprocessing pipeline that eases the learning process and yields a flexible solution for any bin-picking application. To train our models, we created a highly accurate RGB-D/3D dataset, which is openly available upon request. Finally, we benchmarked our method against a 2D Fully Convolutional Network-based method, improving the top-1 precision score by 1.8% for suction and 1.7% for the gripper.
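As a concrete illustration of the architecture named in the abstract, below is a minimal PyTorch sketch of a DGCNN-style EdgeConv layer stacked into a per-point affordance head. This follows the standard Deep Graph Convolutional Network formulation (kNN graph rebuilt in feature space, edge features [x_i, x_j − x_i], shared MLP, neighborhood max-pooling); the layer widths, the neighborhood size k, the names knn_graph/AffordanceNet, and the single sigmoid affordance output are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal DGCNN-style EdgeConv sketch (PyTorch). Sizes and the single
# per-point affordance score are illustrative, not the paper's exact model.
import torch
import torch.nn as nn

def knn_graph(x, k):
    # x: (B, N, C) point features; return the indices (B, N, k) of each
    # point's k nearest neighbours under Euclidean distance (self excluded).
    dist = torch.cdist(x, x)                                   # (B, N, N)
    return dist.topk(k + 1, largest=False).indices[:, :, 1:]  # drop self

class EdgeConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_ch, out_ch), nn.ReLU())

    def forward(self, x):
        # x: (B, N, C). Build edge features [x_i, x_j - x_i] over a kNN
        # graph, apply a shared MLP, then max-pool over each neighbourhood.
        B, N, C = x.shape
        idx = knn_graph(x, self.k)                             # (B, N, k)
        nbrs = torch.gather(
            x.unsqueeze(1).expand(B, N, N, C), 2,
            idx.unsqueeze(-1).expand(B, N, self.k, C))         # (B, N, k, C)
        ctr = x.unsqueeze(2).expand_as(nbrs)
        edge = torch.cat([ctr, nbrs - ctr], dim=-1)            # (B, N, k, 2C)
        return self.mlp(edge).max(dim=2).values                # (B, N, out_ch)

class AffordanceNet(nn.Module):
    # Stacks EdgeConv layers (graph recomputed in feature space each layer)
    # and maps each point to a grasp-affordance score in [0, 1].
    def __init__(self, k=20):
        super().__init__()
        self.conv1 = EdgeConv(3, 64, k)
        self.conv2 = EdgeConv(64, 128, k)
        self.head = nn.Linear(64 + 128, 1)

    def forward(self, pts):                                    # pts: (B, N, 3)
        f1 = self.conv1(pts)
        f2 = self.conv2(f1)
        return torch.sigmoid(self.head(torch.cat([f1, f2], dim=-1))).squeeze(-1)

scores = AffordanceNet()(torch.rand(2, 1024, 3))               # (2, 1024)
```

In a bin-picking setting, the per-point scores would be thresholded or ranked to select candidate grasping points for the suction or gripper end effector.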