Image and Vision Computing

Predictive monocular odometry (PMO): What is possible without RANSAC and multiframe bundle adjustment?



Abstract

Visual odometry using only a monocular camera faces more algorithmic challenges than stereo odometry. We present a robust monocular visual odometry framework for automotive applications. An extended propagation-based tracking framework is proposed that yields highly accurate (unscaled) pose estimates. Scale is supplied by ground-plane pose estimation based on street-pixel labeling with a convolutional neural network (CNN). The proposed framework has been extensively tested on the KITTI dataset and ranks higher than currently published state-of-the-art monocular methods on the KITTI odometry benchmark. Unlike other VO/SLAM methods, this result is achieved without a loop-closing mechanism, without RANSAC, and without multiframe bundle adjustment. We thus challenge the common belief that robust systems can only be built using iterative robustification tools such as RANSAC. (C) 2017 Published by Elsevier B.V.
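To illustrate the scale-recovery idea the abstract describes, below is a minimal sketch of how a metric scale factor can be obtained from the ground plane in a monocular setup: fit a plane to unscaled 3D points triangulated at pixels a CNN labeled as street, then compare the camera-to-plane distance with the known camera mounting height (about 1.65 m in the KITTI setup). The function names, the least-squares plane fit, and the fixed-height assumption are illustrative; the paper's exact estimator may differ.

import numpy as np

CAMERA_HEIGHT_M = 1.65  # assumed known mounting height above the road (KITTI setup)

def fit_ground_plane(street_points):
    # Fit a plane n.x + d = 0 to an (N, 3) array of street points
    # in the camera frame, via least squares (SVD of centered points).
    centroid = street_points.mean(axis=0)
    _, _, vt = np.linalg.svd(street_points - centroid)
    normal = vt[-1]               # direction of smallest variance = plane normal
    d = -normal.dot(centroid)
    return normal, d

def metric_scale(street_points):
    # Unscaled distance from the camera center (origin of the camera
    # frame) to the fitted plane; its ratio to the true mounting height
    # is the metric scale factor for the monocular reconstruction.
    normal, d = fit_ground_plane(street_points)
    unscaled_height = abs(d) / np.linalg.norm(normal)
    return CAMERA_HEIGHT_M / unscaled_height

# Usage: rescale the unscaled translation of an estimated pose, e.g.
#   t_metric = metric_scale(points_on_street) * t_unscaled

In this scheme the CNN's role is only to select which triangulated points belong to the road surface, so the plane fit is not corrupted by points on vehicles or buildings.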
