International Journal of Computer Vision

DeepTAM: Deep Tracking and Mapping with Convolutional Neural Networks



Abstract

We present a system for dense keyframe-based camera tracking and depth map estimation that is entirely learned. For tracking, we estimate small pose increments between the current camera image and a synthetic viewpoint. This formulation significantly simplifies the learning problem and alleviates the dataset bias for camera motions. Further, we show that generating a large number of pose hypotheses leads to more accurate predictions. For mapping, we accumulate information in a cost volume centered at the current depth estimate. The mapping network then combines the cost volume and the keyframe image to update the depth prediction, thereby effectively making use of depth measurements and image-based priors. Our approach yields state-of-the-art results with few images and is robust with respect to noisy camera poses. We demonstrate that the performance of our 6 DOF tracking competes with RGB-D tracking algorithms. We compare favorably against strong classic and deep learning powered dense depth algorithms.
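The mapping step described above accumulates photometric evidence in a cost volume whose depth hypotheses are centered on the current depth estimate. As a rough illustration only (this is not the paper's network; it uses a toy fronto-parallel warp in place of full projective warping, and the names `cost_volume`, `band`, and `num_labels` are hypothetical), a plane-sweep cost volume of this kind can be sketched as:

```python
import numpy as np

def cost_volume(keyframe, frames, depth_est, num_labels=8, band=0.5):
    """Accumulate photometric cost over depth hypotheses sampled in a
    band around the current depth estimate (simplified illustration:
    disparity is modeled as baseline / depth, shifting columns only)."""
    H, W = keyframe.shape
    # Relative depth offsets centered on the current estimate.
    offsets = np.linspace(-band, band, num_labels)
    volume = np.zeros((num_labels, H, W))
    for i, off in enumerate(offsets):
        d = depth_est * (1.0 + off)  # hypothesized per-pixel depth
        for img, baseline in frames:
            # Toy warp: horizontal disparity inversely proportional to depth.
            disp = baseline / np.maximum(d, 1e-6)
            cols = np.clip(np.arange(W)[None, :] - disp.astype(int), 0, W - 1)
            warped = np.take_along_axis(img, cols, axis=1)
            # Sum of absolute differences as the photometric cost.
            volume[i] += np.abs(keyframe - warped)
    return volume, offsets

# Winner-take-all depth update: pick the lowest-cost hypothesis per pixel.
# (In DeepTAM, a network refines the depth from the volume instead.)
def wta_depth(volume, offsets, depth_est):
    return depth_est * (1.0 + offsets[np.argmin(volume, axis=0)])
```

In the paper, the mapping network replaces the winner-take-all step: it combines the cost volume with the keyframe image, so image-based priors can correct pixels where the photometric cost alone is ambiguous.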


