
Deep Virtual Stereo Odometry: Leveraging Deep Depth Prediction for Monocular Direct Sparse Odometry



Abstract

Monocular visual odometry approaches that rely purely on geometric cues are prone to scale drift and require sufficient motion parallax in successive frames for motion estimation and 3D reconstruction. In this paper, we propose to leverage deep monocular depth prediction to overcome the limitations of geometry-based monocular visual odometry. To this end, we incorporate deep depth predictions into Direct Sparse Odometry (DSO) as direct virtual stereo measurements. For depth prediction, we design a novel deep network that refines the depth predicted from a single image in a two-stage process. We train our network in a semi-supervised way on photoconsistency in stereo images and on consistency with accurate sparse depth reconstructions from Stereo DSO. Our depth predictions outperform state-of-the-art approaches for monocular depth on the KITTI benchmark. Moreover, our Deep Virtual Stereo Odometry clearly exceeds previous monocular and deep-learning-based methods in accuracy. It even achieves performance comparable to state-of-the-art stereo methods while relying on only a single camera.
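To make the abstract's "direct virtual stereo measurements" and semi-supervised training objective a little more concrete, below is a minimal, hedged sketch of how such terms could be written. The symbols (virtual baseline b, focal length f_x, weight lambda, predicted disparity D(p), optimized inverse depth d_p, loss weights alpha) are illustrative assumptions and are not notation taken from this page; the exact formulation is given in the paper.

% Sketch 1: a DSO-style photometric energy over keyframe poses T and inverse
% depths d, augmented with a per-point "virtual stereo" residual. The residual
% compares the image sampled at the virtual-right-view coordinate induced by
% the optimized inverse depth d_p (virtual baseline b, focal length f_x) with
% the coordinate induced by the network's disparity prediction D(p); it
% vanishes when f_x b d_p = D(p), so the prediction acts like a stereo
% measurement on the otherwise monocular estimate.
E(\mathbf{T}, d) = E_{\mathrm{photo}}(\mathbf{T}, d)
  + \lambda \sum_{p \in \mathcal{P}}
    \left\| I\!\left(p - (f_x b\, d_p,\ 0)^{\top}\right)
          - I\!\left(p - (D(p),\ 0)^{\top}\right) \right\|_{\gamma}

% Sketch 2: a semi-supervised training loss for the depth network, combining
% self-supervised photoconsistency between the images of a stereo pair with a
% supervised term on the sparse depths reconstructed by Stereo DSO, plus a
% regularizer (e.g. smoothness); the weights alpha_* are hypothetical.
\mathcal{L} = \alpha_{\mathrm{photo}} \mathcal{L}_{\mathrm{photoconsistency}}
            + \alpha_{\mathrm{sparse}} \mathcal{L}_{\mathrm{sparse\text{-}depth}}
            + \alpha_{\mathrm{reg}} \mathcal{L}_{\mathrm{regularization}}

In this reading, the predicted disparity plays the role of a second, virtual camera, which is why the method can counteract scale drift while relying on only a single physical camera, as the abstract claims.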

