The Visual Computer

Real-time 3D scene reconstruction with dynamically moving object using a single depth camera



Abstract

Online 3D reconstruction of real-world scenes has been attracting increasing interest from both academia and industry, especially as consumer-level depth cameras have become widely available. Most recent online reconstruction systems take live depth data from a moving Kinect camera and incrementally fuse it into a single high-quality 3D model in real time. Although most real-world scenes have a largely static environment, everyday objects in a scene often move dynamically, and these are non-trivial to reconstruct, especially when the camera itself is also moving. To address this problem, we propose a real-time approach based on a single depth camera for simultaneous reconstruction of a dynamic object and the static environment, and we provide solutions for its key issues. In particular, we first introduce a robust optimization scheme that exploits raycasted maps to segment the moving object and the background from the live depth map. The corresponding depth data are then fused into separate volumes. These volumes are raycasted to extract views of the implicit surface, which serve as a consistent reference frame for the next iteration of segmentation and tracking. To handle fast motion of the dynamic object and the handheld camera in the fusion stage, we propose a sequential 6D pose prediction method that greatly increases registration robustness and avoids the registration failures that occur in conventional methods. Experimental results show that our approach reconstructs both the moving object and the static environment with rich detail and outperforms conventional methods in multiple respects.
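The segmentation step described above compares the live depth map against a depth map raycast out of the fused background volume: pixels whose observed depth deviates strongly from the predicted static surface are labeled as belonging to the moving object. The following is a minimal sketch of that residual test only; the paper's actual robust optimization scheme is more involved, and the threshold value and function names here are illustrative assumptions.

```python
import numpy as np

def segment_moving(live_depth, raycast_depth, thresh=0.05):
    """Label pixels whose live depth deviates from the depth raycast
    from the static background volume as 'moving'.

    Simplified per-pixel residual test (assumption: the paper's robust
    optimization refines this). Depths are in meters; invalid pixels
    are encoded as 0 and never labeled moving.
    """
    valid = (live_depth > 0) & (raycast_depth > 0)
    moving = valid & (np.abs(live_depth - raycast_depth) > thresh)
    return moving  # boolean mask: True = dynamic-object pixel

# Tiny 2x2 example: one pixel moved ~0.4 m toward the camera,
# one pixel has no valid measurement (depth 0).
live = np.array([[1.00, 1.00],
                 [0.60, 0.00]])
ref  = np.array([[1.00, 1.02],
                 [1.00, 1.00]])
mask = segment_moving(live, ref)
print(mask)
```

Depth data under the resulting mask would then be fused into the object volume, and the rest into the background volume.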
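A sequential pose prediction for fast motion, as mentioned in the abstract, can be illustrated with a constant-velocity model on SE(3): the relative motion between the last two frames is re-applied to the current pose to seed registration for the next frame. This is a common heuristic sketch, not necessarily the paper's exact predictor; the pose representation and function names below are assumptions.

```python
import numpy as np

def predict_next_pose(T_prev, T_cur):
    """Constant-velocity extrapolation of a 4x4 rigid pose:
    re-apply the last inter-frame motion to the current pose.
    Used here only to illustrate seeding registration (e.g. ICP)
    under fast motion; the paper's sequential 6D predictor may
    differ (assumption)."""
    delta = T_cur @ np.linalg.inv(T_prev)  # last inter-frame motion
    return delta @ T_cur                   # extrapolate one step ahead

def translation(x, y, z):
    """Build a pure-translation 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Camera moved 0.1 m along x between the last two frames,
# so the predicted next pose is another 0.1 m along x.
T_prev = translation(0.0, 0.0, 0.0)
T_cur  = translation(0.1, 0.0, 0.0)
T_pred = predict_next_pose(T_prev, T_cur)
print(T_pred[:3, 3])  # → [0.2 0.  0. ]
```

Seeding the registration with such a prediction shrinks the initial alignment error, which is what makes tracking robust when both the object and the handheld camera move quickly.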
