IEEE Robotics and Automation Letters

mVIL-Fusion: Monocular Visual-Inertial-LiDAR Simultaneous Localization and Mapping in Challenging Environments



Abstract

We propose mVIL-Fusion, a three-level multisensor fusion system that achieves robust state estimation and globally consistent mapping in perceptually degraded environments. First, a LiDAR-depth-assisted visual-inertial odometry (VIO) with LiDAR odometry (LO) synchronous prediction and distortion correction functions is proposed as the frontend of our system. Second, a novel double-sliding-window-based midend jointly optimizes LiDAR scan-to-scan translation constraints (serving as a VIO status detection function) and scan-to-map rotation constraints (serving as a local mapping function) to enhance the accuracy and robustness of the state estimation. In the backend, loop closures between local-map-based keyframes are identified with altitude verification, and the global map is generated by incremental smoothing of a pose-only factor graph with an altitude prior. The performance of our system is verified on both a public dataset and several self-collected sequences in challenging environments. To benefit the robotics community, our implementation is available at https://github.com/Stan994265/mVIL-Fusion.
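The backend described above amounts to incremental smoothing of a pose-only factor graph built from keyframe odometry and loop-closure constraints. The sketch below is only a rough illustration of that idea, not the authors' implementation (which is available at the GitHub link above): it uses GTSAM's iSAM2 in Python with hypothetical keyframe poses, noise values, and a hypothetical loop pair, and it does not reproduce the paper's altitude verification or altitude prior.

```python
# Minimal pose-only factor graph with incremental smoothing (GTSAM iSAM2).
# Hypothetical poses and noise; illustrates the general backend idea only.
import numpy as np
import gtsam

isam = gtsam.ISAM2()

# Anchor the first keyframe with a prior factor (sigmas: rotation [rad], translation [m]).
graph = gtsam.NonlinearFactorGraph()
values = gtsam.Values()
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([1e-3] * 3 + [1e-3] * 3))
graph.add(gtsam.PriorFactorPose3(0, gtsam.Pose3(), prior_noise))
values.insert(0, gtsam.Pose3())
isam.update(graph, values)

# Add between-pose constraints from the (hypothetical) front-/mid-end keyframe odometry.
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.02] * 3 + [0.1] * 3))
for k in range(1, 4):  # keyframes assumed to move 1 m along x each step
    graph = gtsam.NonlinearFactorGraph()
    values = gtsam.Values()
    delta = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0))
    graph.add(gtsam.BetweenFactorPose3(k - 1, k, delta, odom_noise))
    prev = isam.calculateEstimate().atPose3(k - 1)
    values.insert(k, prev.compose(delta))  # initial guess from the previous estimate
    isam.update(graph, values)

# A detected loop closure adds one more between-factor and another incremental update.
loop_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.05] * 3 + [0.2] * 3))
loop_graph = gtsam.NonlinearFactorGraph()
loop_graph.add(gtsam.BetweenFactorPose3(
    3, 0, gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(-3.0, 0.0, 0.0)), loop_noise))
isam.update(loop_graph, gtsam.Values())

poses = isam.calculateEstimate()  # globally consistent keyframe poses
print(poses.atPose3(3))
```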
