Pacific Rim International Conference on Artificial Intelligence

Deep Global-Relative Networks for End-to-End 6-DoF Visual Localization and Odometry

Abstract

Although a wide variety of deep neural networks for robust Visual Odometry (VO) can be found in the literature, they are still unable to solve the drift problem in long-term robot navigation. Thus, this paper proposes novel deep end-to-end networks for the long-term 6-DoF VO task. It fuses relative and global networks based on Recurrent Convolutional Neural Networks (RCNNs) to improve monocular localization accuracy: the relative sub-networks smooth the VO trajectory, while the global sub-networks are designed to avoid the drift problem. All parameters are jointly optimized using Cross Transformation Constraints (CTC), which represent the temporal geometric consistency of consecutive frames, together with the Mean Square Error (MSE) between the predicted poses and ground truth. Experimental results on both indoor and outdoor datasets show that the method outperforms other state-of-the-art learning-based VO methods in terms of pose accuracy.
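The abstract does not give the loss implementation, but the joint objective it describes (MSE on global poses plus a CTC-style term tying each relative transform to the composition of consecutive global predictions) can be sketched minimally. The helper names, the SE(3)-matrix pose representation, and the weight `alpha` below are assumptions, not the authors' code:

```python
import numpy as np

def se3_inv(T):
    # Inverse of a 4x4 rigid-body transform [R | t; 0 | 1].
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def joint_loss(pred_global, pred_rel, gt_global, alpha=1.0):
    """MSE between predicted and ground-truth global poses, plus a
    cross-transformation consistency term: each predicted relative
    transform should equal inv(G_t) @ G_{t+1} of the global predictions."""
    mse = np.mean([np.linalg.norm(p - g) ** 2
                   for p, g in zip(pred_global, gt_global)])
    ctc = np.mean([np.linalg.norm(r - se3_inv(pred_global[i]) @ pred_global[i + 1]) ** 2
                   for i, r in enumerate(pred_rel)])
    return mse + alpha * ctc
```

In this sketch the CTC term vanishes exactly when the relative and global sub-network outputs are geometrically consistent, which is the temporal constraint the abstract attributes to CTC.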
