Frontiers in Neurorobotics

Low-Latency Line Tracking Using Event-Based Dynamic Vision Sensors



Abstract

In order to safely navigate and orient in their local surroundings, autonomous systems need to rapidly extract and persistently track visual features from the environment. While there are many algorithms tackling these tasks for traditional frame-based cameras, those have to deal with the fact that conventional cameras sample their environment at a fixed frequency. Most prominently, the same features have to be found in consecutive frames, and corresponding features then need to be matched using elaborate techniques, as any information between the two frames is lost. We introduce a novel method to detect and track line structures in data streams of event-based silicon retinae [also known as dynamic vision sensors (DVS)]. In contrast to conventional cameras, these biologically inspired sensors generate a quasi-continuous stream of vision information analogous to the information stream created by the ganglion cells in mammalian retinae. All pixels of a DVS operate asynchronously, without a periodic sampling rate, and emit a so-called DVS address event as soon as they perceive a luminance change exceeding an adjustable threshold. We use the high temporal resolution achieved by the DVS to track features continuously through time instead of only at fixed points in time. The focus of this work lies on tracking lines in a mostly static environment observed by a moving camera, a typical setting in mobile robotics. Since DVS events are mostly generated at object boundaries and edges, which in man-made environments often form lines, lines were chosen as the feature to track. Our method is based on detecting planes of DVS address events in x-y-t space and tracing these planes through time. It is robust against noise and runs in real time on a standard computer; hence, it is suitable for low-latency robotics.
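The threshold-based event generation described above can be illustrated with a toy pixel model. This is a simplified sketch for illustration only (the real sensor circuit is analog and fully asynchronous); the function name and parameters are hypothetical:

```python
import numpy as np

def dvs_events(log_frames, times, theta=0.15):
    """Toy DVS pixel model: each pixel emits an address event whenever
    its log luminance drifts more than `theta` from the value stored at
    its last event.  Returns a list of (x, y, t, polarity) tuples.

    log_frames: (T, H, W) array of log-luminance samples
    times:      (T,) timestamps of those samples
    """
    ref = log_frames[0].copy()           # per-pixel reference level
    events = []
    for frame, t in zip(log_frames[1:], times[1:]):
        diff = frame - ref
        on = diff > theta                # brightness increased
        off = diff < -theta              # brightness decreased
        for y, x in np.argwhere(on | off):
            pol = 1 if on[y, x] else -1
            events.append((int(x), int(y), t, pol))
            ref[y, x] = frame[y, x]      # reset reference at the event
    return events

# A single pixel brightening in two small steps crosses the threshold
# only on the second step, so exactly one ON event is emitted.
frames = np.log(np.array([[[1.0]], [[1.1]], [[1.3]]]))
evts = dvs_events(frames, times=[0.0, 0.01, 0.02], theta=0.15)
print(evts)  # → [(0, 0, 0.02, 1)]
```

Note that, as in the real sensor, events carry only an address and a timestamp, not an intensity value.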
The efficacy and performance of the method are evaluated on real-world data sets showing man-made structures in an office building, recorded with a DAVIS240C sensor; the event data are used for tracking and the frame data for ground-truth estimation.
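The core geometric idea, fitting a plane to address events in x-y-t space, can be sketched as a minimal least-squares fit on synthetic events. The paper's actual detector and tracker are more elaborate; the names and the plane parameterization here are illustrative assumptions:

```python
import numpy as np

def fit_event_plane(events):
    """Least-squares fit of a plane t = a*x + b*y + c to DVS address
    events given as an (N, 3) array of (x, y, t) rows.

    The coefficients (a, b) encode the edge's orientation and apparent
    speed in the image plane; c is a time offset.
    """
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, t, rcond=None)
    return coeffs  # (a, b, c)

# Synthetic example: a vertical line sweeping rightward at v px/s
# generates events whose timestamps satisfy t = x / v, i.e. a plane
# with a = 1/v and b = 0 (noise-free here; real data would add jitter).
v = 200.0                              # assumed edge speed in px/s
rng = np.random.default_rng(0)
xs = rng.uniform(0, 100, 500)
ys = rng.uniform(0, 100, 500)
ts = xs / v
a, b, c = fit_event_plane(np.column_stack([xs, ys, ts]))
print(f"{1.0 / a:.1f}")  # → 200.0 (recovered speed in px/s)
```

Tracking then amounts to re-fitting such a plane as new events arrive, so the line estimate is updated quasi-continuously rather than once per frame.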
