IEEE International Conference on Image Processing

A Unified Unsupervised Learning Framework for Stereo Matching and Ego-Motion Estimation



Abstract

Learning to estimate depth and ego-motion from video sequences via deep convolutional networks is attracting significant attention for its potentially wide computer vision applications. Most prior work on unsupervised depth learning uses monocular video sequences as the input to the networks. However, the results then require a scale factor, computed frame-to-frame, to maintain a stable relative scale. In this paper, we propose an unsupervised learning framework for joint depth and ego-motion estimation from stereo sequences. Stereo sequences provide both spatial (left-to-right) and temporal (forward-to-backward) photometric warping constraints for supervision, and allow an absolute scale factor to be recovered for scene depth and camera pose, which is of great significance for vision guidance. Experiments on the KITTI driving dataset show that our framework outperforms state-of-the-art unsupervised approaches.
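The spatial photometric warping constraint mentioned in the abstract can be illustrated with a minimal NumPy sketch: for a rectified stereo pair, each left-image pixel (x, y) should match the right image at (x - disparity, y), and the mean absolute difference after warping serves as an unsupervised loss. The function names, nearest-neighbor sampling, and L1 penalty here are illustrative assumptions, not the paper's implementation (which would use differentiable bilinear sampling over learned disparity maps).

```python
import numpy as np

def warp_right_to_left(right, disparity):
    """Inverse-warp the right image into the left view.

    For each left-image pixel (x, y), sample the right image at
    (x - disparity[y, x], y) -- the standard rectified-stereo relation.
    Nearest-neighbor sampling with edge clamping keeps the sketch
    dependency-free (a trainable network would use bilinear sampling).
    """
    h, w = right.shape
    src_x = np.arange(w)[None, :] - disparity          # source x per pixel
    src_x = np.clip(np.round(src_x).astype(int), 0, w - 1)
    src_y = np.repeat(np.arange(h)[:, None], w, axis=1)
    return right[src_y, src_x]

def photometric_loss(left, right, disparity):
    """Mean absolute photometric error between the left image and the
    warped right image: the spatial (left-to-right) warping constraint."""
    return float(np.mean(np.abs(left - warp_right_to_left(right, disparity))))
```

With a correct disparity map the warped right image reproduces the left image and the loss is near zero; a wrong disparity raises it, which is the signal that drives the unsupervised training.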


