ACM Transactions on Graphics

BundleFusion: Real-Time Globally Consistent 3D Reconstruction Using On-the-Fly Surface Reintegration

Abstract

Real-time, high-quality 3D scanning of large-scale scenes is key to mixed reality and robotic applications. However, scalability brings challenges of drift in pose estimation, introducing significant errors in the accumulated model. Approaches often require hours of offline processing to globally correct model errors. Recent online methods demonstrate compelling results but suffer from (1) needing minutes to perform online correction, preventing true real-time use; (2) brittle frame-to-frame (or frame-to-model) pose estimation, resulting in many tracking failures; or (3) supporting only unstructured point-based representations, which limit scan quality and applicability. We systematically address these issues with a novel, real-time, end-to-end reconstruction framework. At its core is a robust pose estimation strategy, optimizing per frame for a global set of camera poses by considering the complete history of RGB-D input with an efficient hierarchical approach. We remove the heavy reliance on temporal tracking and continually localize to the globally optimized frames instead. We contribute a parallelizable optimization framework, which employs correspondences based on sparse features and dense geometric and photometric matching. Our approach estimates globally optimized (i.e., bundle-adjusted) poses in real time, supports robust tracking with recovery from gross tracking failures (i.e., relocalization), and re-estimates the 3D model in real time to ensure global consistency, all within a single framework. Our approach outperforms state-of-the-art online systems with quality on par with offline methods, but with unprecedented speed and scan completeness. Our framework leads to a comprehensive online scanning solution for large indoor environments, enabling ease of use and high-quality results.
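To make the pose optimization concrete: the global alignment solves jointly for all camera poses by minimizing a sparse-plus-dense energy over frame pairs. The sketch below follows the structure the abstract describes; the symbols, weights, and residual definitions are illustrative rather than a verbatim reproduction of the paper's equations.

```latex
% Joint alignment energy over all rigid camera poses T_1..T_n (illustrative).
E(\mathcal{T}) =
    w_{\mathrm{sparse}} \sum_{i,j} \sum_{(k,l)\,\in\,\mathcal{C}_{ij}}
        \bigl\| \mathcal{T}_i\, p_{i,k} - \mathcal{T}_j\, p_{j,l} \bigr\|_2^2
  \;+\; w_{\mathrm{photo}}\, E_{\mathrm{photo}}(\mathcal{T})
  \;+\; w_{\mathrm{geo}}\,   E_{\mathrm{geo}}(\mathcal{T})
```

Here \(\mathcal{C}_{ij}\) is the set of sparse feature correspondences between frames i and j, \(E_{\mathrm{photo}}\) is a dense photometric term penalizing intensity differences under reprojection, and \(E_{\mathrm{geo}}\) is a dense geometric (point-to-plane) term; a hierarchical solver minimizes this energy per frame in real time.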
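The "on-the-fly surface reintegration" of the title refers to how the model stays consistent as poses change: when the global optimization revises a frame's pose, that frame's old contribution is subtracted from the volumetric signed-distance field (de-integrated), and the frame is fused again at the corrected pose. Below is a minimal NumPy sketch of this symmetric update, assuming the standard weighted-average TSDF fusion rule; `TSDFVolume`, `apply_pose_update`, and `sample_frame` are hypothetical names, and the real system operates on a sparse, GPU-resident volume.

```python
import numpy as np

class TSDFVolume:
    """Dense TSDF grid with per-voxel fusion weights (minimal sketch)."""

    def __init__(self, shape):
        self.D = np.zeros(shape, dtype=np.float32)  # fused signed distances
        self.W = np.zeros(shape, dtype=np.float32)  # accumulated weights

    def integrate(self, d, w):
        # Weighted running average: D <- (W*D + w*d) / (W + w), W <- W + w.
        W_new = self.W + w
        safe = np.where(W_new > 0, W_new, 1.0)  # avoid division by zero
        self.D = np.where(W_new > 0, (self.W * self.D + w * d) / safe, self.D)
        self.W = W_new

    def deintegrate(self, d, w):
        # Exact inverse of integrate(d, w): removes one frame's contribution.
        W_new = self.W - w
        safe = np.where(W_new > 0, W_new, 1.0)
        self.D = np.where(W_new > 0, (self.W * self.D - w * d) / safe, 0.0)
        self.W = np.maximum(W_new, 0.0)

def apply_pose_update(volume, sample_frame, frame, old_pose, new_pose):
    """Reintegration step: when global optimization revises a frame's pose,
    undo its old contribution and fuse it again at the corrected pose.
    `sample_frame(frame, pose) -> (d, w)` is a hypothetical helper that
    projects a depth frame into the volume and returns per-voxel samples."""
    volume.deintegrate(*sample_frame(frame, old_pose))
    volume.integrate(*sample_frame(frame, new_pose))
```

Because de-integration exactly inverts integration, the model can be corrected incrementally as poses converge, rather than re-fusing the entire input sequence offline.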
