State-of-the-art methods for large-scale 3D reconstruction from RGB-D sensors usually reduce drift in camera tracking by globally optimizing the estimated camera poses in real-time, without simultaneously updating the reconstructed surface on pose changes. We propose an efficient on-the-fly surface correction method for globally consistent dense 3D reconstruction of large-scale scenes. Our approach uses a dense Visual RGB-D SLAM system that estimates the camera motion in real-time on a CPU and refines it in a global pose graph optimization. Consecutive RGB-D frames are locally fused into keyframes, which are incorporated into a sparse voxel hashed Signed Distance Field (SDF) on the GPU. On pose graph updates, the SDF volume is corrected on-the-fly using a novel keyframe re-integration strategy with reduced GPU-host streaming. We demonstrate in an extensive quantitative evaluation that our method is up to 93% more runtime-efficient than the state of the art and requires significantly less memory, with only a negligible loss of surface quality. Overall, our system requires only a single GPU and allows for real-time surface correction of large environments.
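The on-the-fly correction described above is possible because standard SDF fusion is a weighted running average per voxel, which is exactly invertible: a keyframe's contribution can be de-integrated under its old pose and re-integrated under the optimized pose, without rebuilding the volume. A minimal per-voxel sketch of this idea (function names and numeric values are illustrative, not the authors' GPU implementation):

```python
def integrate(D, W, d, w):
    """Fuse an SDF observation d with weight w into voxel state (D, W)
    using the standard weighted running average."""
    W_new = W + w
    D_new = (W * D + w * d) / W_new
    return D_new, W_new

def deintegrate(D, W, d, w):
    """Exactly undo a previous integrate(D, W, d, w): subtract the
    observation so the voxel returns to its prior state."""
    W_new = W - w
    D_new = (W * D - w * d) / W_new if W_new > 0 else 0.0
    return D_new, W_new

# On a pose graph update, a keyframe is corrected by de-integrating its
# SDF observation computed under the old pose and re-integrating it under
# the optimized pose (distances here are made-up example values):
D, W = 0.0, 0.0
D, W = integrate(D, W, d=0.04, w=1.0)    # initial fusion, old pose
D, W = deintegrate(D, W, d=0.04, w=1.0)  # remove old-pose contribution
D, W = integrate(D, W, d=0.02, w=1.0)    # re-fuse under corrected pose
```

Because only voxels touched by the corrected keyframes change, the volume can stay resident on the GPU, which is consistent with the reduced GPU-host streaming claimed in the abstract.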