IEEE Transactions on Image Processing

Robust 3D Reconstruction of Dynamic Scenes From Single-Photon Lidar Using Beta-Divergences



Abstract

In this article, we present a new algorithm for fast, online 3D reconstruction of dynamic scenes using the times of arrival of photons recorded by single-photon detector arrays. One of the main challenges in 3D imaging using single-photon lidar in practical applications is the presence of strong ambient illumination, which corrupts the data and can jeopardize the detection of peaks/surfaces in the signals. This background noise complicates not only the observation model classically used for 3D reconstruction but also the estimation procedure, which requires iterative methods. In this work, we consider a new similarity measure for robust depth estimation, which allows us to use a simple observation model and a non-iterative estimation procedure while remaining robust to mis-specification of the background illumination model. This choice leads to a computationally attractive depth estimation procedure without significant degradation of the reconstruction performance. This new depth estimation procedure is coupled with a spatio-temporal model to capture the natural correlation between neighboring pixels and successive frames for dynamic scene analysis. The resulting online inference process is scalable and well suited for parallel implementation. The benefits of the proposed method are demonstrated through a series of experiments conducted with simulated and real single-photon lidar videos, allowing the analysis of dynamic scenes at 325 m observed under extreme ambient illumination conditions.
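The key ingredient named in the title is the beta-divergence used as the similarity measure between a pixel's photon timing histogram and a candidate surface response. The sketch below is a minimal illustration of that idea, not the paper's actual estimator: it scores each candidate depth bin by the beta-divergence between the observed histogram and a circularly shifted, count-matched impulse response, then keeps the best-scoring bin. All function names, parameters (e.g. the choice beta = 0.5), and the brute-force search over bins are assumptions made for this example.

```python
import numpy as np

def beta_divergence(p, q, beta=0.5, eps=1e-12):
    """Beta-divergence D_beta(p || q) summed over histogram bins.

    For beta -> 1 this recovers the Kullback-Leibler divergence; smaller
    beta values down-weight large residuals, which is one way to stay
    robust to an unmodeled (e.g. uniform) background term.
    Valid for beta not in {0, 1}.
    """
    p = np.maximum(p, eps)
    q = np.maximum(q, eps)
    return np.sum(
        (p**beta + (beta - 1.0) * q**beta - beta * p * q**(beta - 1.0))
        / (beta * (beta - 1.0))
    )

def estimate_depth_bin(histogram, irf, beta=0.5):
    """Return the time-of-arrival bin whose shifted impulse response (IRF)
    best matches the observed histogram under the beta-divergence.

    This exhaustive search is only for illustration; it ignores the
    paper's non-iterative estimation and spatio-temporal regularization.
    """
    n_bins = len(histogram)
    scores = np.empty(n_bins)
    for shift in range(n_bins):
        model = np.roll(irf, shift)                     # candidate surface position
        model = model * histogram.sum() / model.sum()   # crude intensity matching
        scores[shift] = beta_divergence(histogram, model, beta)
    return int(np.argmin(scores))

# Toy usage: a Gaussian-like IRF, a surface at bin 40, plus background counts.
rng = np.random.default_rng(0)
t = np.arange(100)
irf = np.exp(-0.5 * ((t - 50) / 2.0) ** 2)
hist = rng.poisson(5.0 * np.roll(irf, -10) + 0.5)       # signal at bin 40 + background
print(estimate_depth_bin(hist.astype(float), irf))       # expected to be close to 40
```

Choosing beta between 0 and 1 interpolates between the Kullback-Leibler divergence (beta close to 1) and more outlier-tolerant measures, which gives an intuition for why such a measure can tolerate a background illumination term that is missing or mis-specified in the observation model. The paper's actual procedure additionally avoids iterative optimization and couples the per-pixel estimates through a spatio-temporal model, neither of which this per-pixel sketch reproduces.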

