International Journal of Advanced Robotic Systems

A visual simultaneous localization and mapping approach based on scene segmentation and incremental optimization



Abstract

Existing visual simultaneous localization and mapping (V-SLAM) algorithms are usually sensitive to environments with sparse landmarks and to large view transformations caused by camera motion; as the matching rate of feature points drops, they tend to produce large pose errors that lead to tracking failures. To address these problems, this article proposes an improved V-SLAM method based on scene segmentation and an incremental optimization strategy. In the front end, a scene segmentation algorithm that considers the camera's motion direction and angle is proposed. By segmenting the trajectory and feeding the camera motion direction into the tracking thread, an effective prediction model of camera motion is obtained for scenes with sparse landmarks and large view transformations. In the back end, an incremental optimization method that combines the segmentation information with an optimization of the tracking prediction model is proposed. By incrementally adding state parameters and reusing previously computed results, high-precision estimates of the camera trajectory and feature points are obtained at a satisfactory computing speed. The performance of the algorithm is evaluated on two well-known datasets, TUM RGB-D and NYUDv2 RGB-D. The experimental results demonstrate that the method improves computational efficiency over state-of-the-art V-SLAM systems by 10.2% on a desktop platform and by 22.4% on an embedded platform, and that its robustness on the TUM RGB-D dataset is better than that of ORB-SLAM2.
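The abstract does not give the segmentation rule itself, but the idea of splitting a trajectory by camera motion direction and angle can be sketched as follows. This is a minimal illustration, not the authors' algorithm: it assumes 2-D camera positions and a hypothetical angle threshold `angle_thresh_deg`, and starts a new segment whenever the heading change between consecutive motion vectors exceeds that threshold.

```python
import math


def segment_trajectory(poses, angle_thresh_deg=30.0):
    """Split a camera trajectory into segments wherever the motion
    direction turns by more than angle_thresh_deg degrees.
    `poses` is a list of (x, y) camera positions; returns a list of
    segments, each a list of consecutive positions."""
    if len(poses) < 3:
        return [poses]
    segments, current = [], [poses[0], poses[1]]
    for i in range(2, len(poses)):
        # Motion vectors before and after the current pose.
        v1 = (poses[i - 1][0] - poses[i - 2][0],
              poses[i - 1][1] - poses[i - 2][1])
        v2 = (poses[i][0] - poses[i - 1][0],
              poses[i][1] - poses[i - 1][1])
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        if n1 > 0.0 and n2 > 0.0:
            cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
            angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        else:
            angle = 0.0  # Stationary step: do not split on it.
        if angle > angle_thresh_deg:
            segments.append(current)   # Close the current segment.
            current = [poses[i]]       # Start a new one at the turn.
        else:
            current.append(poses[i])
    segments.append(current)
    return segments


# A straight run followed by a 90-degree turn yields two segments.
segs = segment_trajectory([(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)])
```

In a full V-SLAM pipeline, each such segment would get its own motion prediction model, and the back end could then add the segment's state parameters to the optimization incrementally rather than re-solving the whole problem.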


