European Conference on Computer Vision

Jointly Learning Visual Motion and Confidence from Local Patches in Event Cameras



Abstract

We propose the first network to jointly learn visual motion and confidence from events in spatially local patches. Event-based sensors deliver high-temporal-resolution motion information in a sparse, non-redundant format. This creates the potential for low-computation, low-latency motion recognition. Neural networks which extract global motion information, however, are generally computationally expensive. Here, we introduce a novel shallow and compact neural architecture and learning approach to capture reliable visual motion information along with the corresponding confidence of inference. Our network predicts the visual motion at each spatial location using only local events. Our confidence network then identifies which of these predictions will be accurate. In the task of recovering pan-tilt ego velocities from events, we show that each individual confident local prediction of our network can be expected to be as accurate as state-of-the-art optimization approaches which utilize the full image. Furthermore, on a publicly available dataset, we find our local predictions generalize to scenes with camera motions and the presence of independently moving objects. This makes the output of our network well suited for motion-based tasks, such as the segmentation of independently moving objects. We demonstrate on a publicly available motion segmentation dataset that restricting predictions to confident regions is sufficient to achieve results that exceed state-of-the-art methods.
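To illustrate the core idea of confidence-gated local predictions, here is a minimal sketch (not the authors' code; the function name, weighting scheme, and threshold are assumptions for illustration): each local patch yields a 2-D motion prediction and a confidence score, and only confident patches contribute to the global motion estimate.

```python
import numpy as np

def aggregate_confident_motion(patch_motions, confidences, threshold=0.5):
    """Confidence-weighted mean of per-patch motion predictions,
    keeping only patches whose confidence exceeds `threshold`.
    Returns None if no patch is confident enough."""
    patch_motions = np.asarray(patch_motions, dtype=float)  # shape (N, 2)
    confidences = np.asarray(confidences, dtype=float)      # shape (N,)
    mask = confidences > threshold
    if not mask.any():
        return None
    # Weight retained patches by their confidence and normalize.
    w = confidences[mask]
    return (patch_motions[mask] * w[:, None]).sum(axis=0) / w.sum()

# Toy usage: three patches; the third is rejected as unreliable,
# so it does not corrupt the aggregate estimate.
motions = [[1.0, 0.0], [1.2, -0.1], [8.0, 5.0]]
conf = [0.9, 0.8, 0.1]
print(aggregate_confident_motion(motions, conf))
```

Restricting downstream use (e.g. motion segmentation) to the confident regions, as the abstract describes, follows the same gating principle.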

