International Journal of Advanced Robotic Systems

Semantic segmentation-aided visual odometry for urban autonomous driving


Abstract

Visual odometry plays an important role in urban autonomous driving. Feature-based visual odometry methods sample candidates randomly from all available feature points, while alignment-based visual odometry methods take all pixels into account. Both rest on the assumption that the quantitative majority of candidate visual cues represents the true motion. In real urban traffic scenes, however, this assumption can be broken by many dynamic traffic participants: a large truck or bus may occupy most of the image of a front-view monocular camera and lead to incorrect visual odometry estimates. Finding visual cues that represent the real motion is the most important and hardest step for visual odometry in dynamic environments, and the semantic attributes of pixels offer a more reasonable criterion for candidate selection in that case. This article analyzes the availability of all visual cues with the help of pixel-level semantic information and proposes a new visual odometry method that combines feature-based and alignment-based visual odometry in one optimization pipeline. The proposed method was compared with three open-source visual odometry algorithms on the KITTI benchmark data sets and on our own data set. Experimental results confirm that the new approach improves both accuracy and robustness in complex dynamic scenes.
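The core idea of the candidate-selection step described above, rejecting visual cues that fall on dynamic traffic participants, can be sketched as a simple semantic filter. The following is a minimal illustration, not the paper's implementation: the class ids are hypothetical (Cityscapes-style labels are assumed), and the semantic map is taken as given from any pixel-level segmentation network.

```python
import numpy as np

# Hypothetical dynamic class ids (Cityscapes-style labels assumed):
# person, rider, car, truck, bus. Keypoints on these pixels are
# discarded so that only static scene structure feeds the odometry.
DYNAMIC_CLASSES = {11, 12, 13, 14, 15}

def filter_static_keypoints(keypoints, semantic_map,
                            dynamic_classes=DYNAMIC_CLASSES):
    """Keep only keypoints lying on pixels of static semantic classes.

    keypoints    : iterable of (x, y) pixel coordinates
    semantic_map : (H, W) integer array of per-pixel class ids
    """
    kps = np.asarray(keypoints, dtype=int).reshape(-1, 2)
    h, w = semantic_map.shape
    # Drop points outside the image bounds first.
    inside = ((kps[:, 0] >= 0) & (kps[:, 0] < w) &
              (kps[:, 1] >= 0) & (kps[:, 1] < h))
    kps = kps[inside]
    # Look up the semantic label under each keypoint (row = y, col = x).
    labels = semantic_map[kps[:, 1], kps[:, 0]]
    static = ~np.isin(labels, list(dynamic_classes))
    return kps[static]
```

In a full pipeline, the surviving keypoints would feed the feature-based term of the optimization, while the same mask could weight pixels in the alignment-based term.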
