
Event-Based Robotic Grasping Detection With Neuromorphic Vision Sensor and Event-Grasping Dataset


Abstract

Robotic grasping plays an important role in the field of robotics. Current state-of-the-art robotic grasping detection systems are usually built on conventional vision sensors, such as RGB-D cameras. Compared to traditional frame-based computer vision, neuromorphic vision is a small and young research community. Event-based datasets are currently scarce because annotating an asynchronous event stream is laborious. Annotating large-scale vision datasets generally demands substantial resources, especially for video-level annotation. In this work, we consider the problem of detecting robotic grasps in a moving camera view of a scene containing objects. To achieve more agile robotic perception, a neuromorphic vision sensor (Dynamic and Active-pixel Vision Sensor, DAVIS) attached to the robot gripper is introduced to explore its potential for grasping detection. We construct a robotic grasping dataset, named the Event-Grasping dataset, containing 91 objects. A spatial-temporal mixed particle filter (SMP Filter) is proposed to track LED-based grasp rectangles, enabling video-level annotation of a single grasp rectangle per object. Because the LEDs blink at high frequency, the Event-Grasping dataset is annotated at a high frequency of 1 kHz. Based on the Event-Grasping dataset, we develop a deep neural network for grasping detection that treats angle learning as a classification problem instead of regression. The method achieves high detection accuracy on the Event-Grasping dataset, with 93% precision under an object-wise split. This work provides a large-scale, well-annotated dataset and promotes neuromorphic vision applications in agile robotics.
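The abstract's key modeling choice, framing grasp-angle learning as classification over discretized orientation bins rather than direct regression, can be illustrated with a minimal PyTorch sketch. The bin count, feature dimension, and all names below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): grasp-angle learning as
# classification over discretized orientation bins instead of regression.
import torch
import torch.nn as nn

NUM_ANGLE_BINS = 18  # assumed: 180 degrees split into 10-degree bins


def angle_to_class(angle_deg: float) -> int:
    """Map a continuous grasp angle in [0, 180) to a discrete bin index."""
    return int((angle_deg % 180.0) / 180.0 * NUM_ANGLE_BINS)


class GraspAngleHead(nn.Module):
    """Classification head over angle bins, placed on top of some
    feature extractor (e.g., a CNN over accumulated event frames)."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(feat_dim, NUM_ANGLE_BINS)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Raw logits; CrossEntropyLoss applies log-softmax internally.
        return self.fc(feats)


# Usage: cross-entropy on bin labels replaces an L2 loss on raw angles.
head = GraspAngleHead()
feats = torch.randn(4, 256)  # stand-in for event-based features
labels = torch.tensor([angle_to_class(a) for a in (0.0, 45.0, 90.0, 135.0)])
loss = nn.CrossEntropyLoss()(head(feats), labels)
loss.backward()
```

One motivation for this formulation, under the assumptions above, is that a grasp angle is periodic: regressing it directly penalizes predictions near the 0/180-degree wrap-around, whereas classification over bins treats all orientations symmetrically.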
