
Deep End-to-End Time-of-Flight Imaging


Abstract

We present an end-to-end image processing framework for time-of-flight (ToF) cameras. Existing ToF image processing pipelines consist of a sequence of operations including modulated exposures, denoising, phase unwrapping and multipath interference correction. While this cascaded modular design offers several benefits, such as closed-form solutions and power-efficient processing, it also suffers from error accumulation and information loss as each module can only observe the output from its direct predecessor, resulting in erroneous depth estimates. We depart from a conventional pipeline model and propose a deep convolutional neural network architecture that recovers scene depth directly from dual-frequency, raw ToF correlation measurements. To train this network, we simulate ToF images for a variety of scenes using a time-resolved renderer, devise depth-specific losses, and apply normalization and augmentation strategies to generalize this model to real captures. We demonstrate that the proposed network can efficiently exploit the spatio-temporal structures of ToF frequency measurements, and validate the performance of the joint multipath removal, denoising and phase unwrapping method on a wide range of challenging scenes.
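For context, the conventional pipeline stage that the proposed network subsumes can be sketched at a single pixel: four-bucket correlation samples yield a wrapped phase per modulation frequency, and a dual-frequency consistency search resolves the wrap count into a depth. This is a minimal illustration of that hand-designed baseline, not the paper's method; the bucket convention c(θ) ∝ cos(φ + θ), the example frequencies, and the brute-force unwrapping search are assumptions for illustration only.

```python
import numpy as np

C = 3e8  # speed of light, m/s


def phase_from_correlations(c0, c90, c180, c270):
    """Wrapped phase from 4-bucket correlation samples at 0/90/180/270 degrees.

    Constant offsets (ambient light) cancel in the two differences.
    """
    return np.arctan2(c270 - c90, c0 - c180) % (2 * np.pi)


def depth_from_phase(phi, freq, wraps):
    """Depth implied by a wrapped phase plus an assumed integer wrap count."""
    return C * (phi + 2 * np.pi * wraps) / (4 * np.pi * freq)


def unwrap_dual_frequency(phi1, phi2, f1, f2, max_wraps=8):
    """Resolve phase ambiguity by choosing the wrap counts whose implied
    depths agree best across the two modulation frequencies."""
    best_depth, best_err = None, np.inf
    for n1 in range(max_wraps):
        for n2 in range(max_wraps):
            d1 = depth_from_phase(phi1, f1, n1)
            d2 = depth_from_phase(phi2, f2, n2)
            if abs(d1 - d2) < best_err:
                best_err, best_depth = abs(d1 - d2), 0.5 * (d1 + d2)
    return best_depth
```

Each stage here is the kind of isolated module the abstract critiques: the unwrapping step sees only the two wrapped phases, so noise or multipath error in the phase estimate propagates unchecked into the depth, which is the error-accumulation problem the end-to-end network is designed to avoid.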