IEEE International Conference on Computer Vision Workshops (ICCV Workshops)

Multi-view image and ToF sensor fusion for dense 3D reconstruction

Abstract

Multi-view stereo methods frequently fail to properly reconstruct 3D scene geometry if visible texture is sparse or the scene exhibits difficult self-occlusions. Time-of-Flight (ToF) depth sensors can provide 3D information regardless of texture, but with only limited resolution and accuracy. To find an optimal reconstruction, we propose an integrated multi-view sensor fusion approach that combines information from multiple color cameras and multiple ToF depth sensors. First, multi-view ToF sensor measurements are combined to obtain a coarse but complete model. Then, the initial model is refined by means of a probabilistic multi-view fusion framework, optimizing over an energy function that aggregates ToF depth sensor information with multi-view stereo and silhouette constraints. We obtain high-quality, dense, and detailed 3D models of scenes that are challenging for stereo alone, while simultaneously reducing the complex noise of ToF sensors.
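
To make the refinement stage concrete, a minimal sketch of such an energy function is given below. The notation is ours, chosen for illustration; the abstract does not give the paper's exact formulation, and the weighting parameters are hypothetical.

    E(S) = E_ToF(S) + \lambda_stereo \, E_stereo(S) + \lambda_sil \, E_sil(S)

Here S denotes the surface estimate initialized from the fused multi-view ToF measurements, E_ToF penalizes deviation from the measured ToF depths, E_stereo is a multi-view photo-consistency (stereo) term, E_sil enforces agreement with the image silhouettes, and the \lambda weights balance the three constraints. Minimizing E(S) over candidate surfaces yields the refined dense model described above.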