ISPRS Journal of Photogrammetry and Remote Sensing

A Regularized Volumetric Fusion Framework for Large-Scale 3D Reconstruction


Abstract

Modern computational resources combined with low-cost depth sensing systems have enabled mobile robots to reconstruct 3D models of their surrounding environments in real time. Unfortunately, low-cost depth sensors are prone to producing undesirable estimation noise in depth measurements, which either results in depth outliers or introduces surface deformations into the reconstructed model. Conventional 3D fusion frameworks integrate multiple error-prone depth measurements over time to reduce noise effects; additional constraints such as steady sensor movement and high frame rates are therefore required to obtain high-quality 3D models. In this paper we propose a generic 3D fusion framework with a controlled regularization parameter which inherently reduces noise at the time of data fusion. This allows the proposed framework to generate high-quality 3D models without enforcing additional constraints. Evaluation of the reconstructed 3D models shows that the proposed framework outperforms state-of-the-art techniques in terms of both absolute reconstruction error and processing time.
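The abstract does not spell out the form of the regularizer, so the sketch below is only a rough illustration of the general idea: a conventional TSDF-style weighted running-average fusion update extended with a hypothetical Tikhonov-style damping term applied at fusion time. The function name `fuse_tsdf`, the parameters `lam` and `smooth_prior`, and the weight cap are illustrative assumptions, not the paper's method; setting `lam = 0` recovers plain weighted fusion.

```python
import numpy as np

def fuse_tsdf(tsdf, weight, new_sdf, new_weight, lam=0.0, smooth_prior=None):
    """Weighted running-average TSDF fusion with an optional regularization
    term. Illustrative sketch only; the paper's actual regularizer is not
    specified in the abstract.

    tsdf, weight : current per-voxel signed-distance values and weights
    new_sdf      : signed distances computed from the current depth frame
    new_weight   : per-voxel integration weights for the new measurement
    lam          : regularization strength; lam = 0 gives plain fusion
    smooth_prior : e.g. a locally smoothed copy of `tsdf` used as a prior
    """
    if smooth_prior is None:
        smooth_prior = tsdf
    # Conventional fusion (lam = 0) averages noisy measurements over time.
    # The extra lam-weighted term additionally pulls each voxel toward the
    # prior, damping outliers and surface deformation during data fusion.
    fused = (weight * tsdf + new_weight * new_sdf + lam * smooth_prior) \
            / (weight + new_weight + lam)
    # Cap accumulated weights, as is common in KinectFusion-style pipelines.
    new_w = np.minimum(weight + new_weight, 255.0)
    return fused, new_w
```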
