IEEE Transactions on Pattern Analysis and Machine Intelligence

High Speed and High Dynamic Range Video with an Event Camera



Abstract

Event cameras are novel sensors that report brightness changes in the form of a stream of asynchronous "events" instead of intensity frames. They offer significant advantages over conventional cameras: high temporal resolution, high dynamic range, and no motion blur. While the stream of events encodes in principle the complete visual signal, the reconstruction of an intensity image from a stream of events is an ill-posed problem in practice. Existing reconstruction approaches are based on hand-crafted priors and strong assumptions about the imaging process as well as the statistics of natural images. In this work we propose to learn to reconstruct intensity images from event streams directly from data instead of relying on any hand-crafted priors. We propose a novel recurrent network to reconstruct videos from a stream of events, and train it on a large amount of simulated event data. During training we propose to use a perceptual loss to encourage reconstructions to follow natural image statistics. We further extend our approach to synthesize color images from color event streams. Our quantitative experiments show that our network surpasses state-of-the-art reconstruction methods by a large margin in terms of image quality (> 20%), while comfortably running in real-time. We show that the network is able to synthesize high framerate videos (> 5,000 frames per second) of high-speed phenomena (e.g., a bullet hitting an object) and is able to provide high dynamic range reconstructions in challenging lighting conditions. As an additional contribution, we demonstrate the effectiveness of our reconstructions as an intermediate representation for event data. We show that off-the-shelf computer vision algorithms can be applied to our reconstructions for tasks such as object classification and visual-inertial odometry, and that this strategy consistently outperforms algorithms that were specifically designed for event data. We release the reconstruction code, a pre-trained model and the datasets to enable further research.
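To make the setting concrete: before a recurrent network can consume an asynchronous event stream, the events are typically binned into a fixed-size tensor. The sketch below (NumPy) shows one common such encoding, a spatio-temporal voxel grid with linear temporal interpolation; the function name and binning details are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate an event stream into a (num_bins, H, W) voxel grid.

    events: (N, 4) array of (t, x, y, polarity), sorted by timestamp t,
            with polarity in {-1, +1}.
    Each event's polarity is split between the two nearest temporal bins
    by linear interpolation, so the grid preserves sub-bin timing.
    """
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    t = events[:, 0]
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = events[:, 3]

    # Normalize timestamps to the continuous range [0, num_bins - 1].
    t_norm = (num_bins - 1) * (t - t[0]) / max(t[-1] - t[0], 1e-9)
    left = np.floor(t_norm).astype(int)
    right = np.clip(left + 1, 0, num_bins - 1)
    w_right = t_norm - left  # interpolation weight toward the later bin

    # Unbuffered accumulation: multiple events may hit the same voxel.
    np.add.at(grid, (left, y, x), p * (1.0 - w_right))
    np.add.at(grid, (right, y, x), p * w_right)
    return grid
```

Because the two interpolation weights for every event sum to one, the grid's total equals the sum of the event polarities, which is a convenient sanity check when feeding such tensors to a reconstruction network.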
