International Conference on Advanced Electronic Materials, Computers and Software Engineering

Research on mapping method based on data fusion of lidar and depth camera


Abstract

At present, mobile robots equipped with a single sensor in an indoor environment suffer from insufficient mapping accuracy and a limited scanning range. This paper therefore proposes a mapping method that fuses data from three sensors: a single-line lidar, a depth camera, and an IMU. First, the depth data collected by the depth camera is reduced in dimensionality to two dimensions, yielding pseudo laser data. The environmental features scanned by the lidar, the pseudo laser data from the depth camera, and the pose information collected and computed by the IMU are then fused through Kalman filtering, which compensates for the position and attitude errors in the lidar measurements. In the mapping phase, under the open-source SLAM algorithm Gmapping, the two-dimensional local grid maps generated from the multi-source-fused lidar data and from the depth camera are merged using the Bayes rule. Experiments show that the global map obtained by this method contains richer environmental information than that from a single sensor, which improves the accuracy of map construction and benefits subsequent navigation and obstacle-avoidance research.
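The dimensionality reduction of the depth image into a two-dimensional pseudo laser scan can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the pinhole intrinsics `fx`, `cx` and the row band approximating the lidar's scan plane are assumed parameters.

```python
import numpy as np

def depth_to_pseudo_scan(depth, fx, cx, band=(200, 280)):
    """Collapse a depth image (meters, H x W) into a 2-D pseudo laser scan.

    For each image column, take the nearest valid depth within a horizontal
    band of rows (approximating the lidar's scan plane), then convert the
    column index to a bearing angle via the pinhole model.
    Returns (angles, ranges), one entry per column.
    """
    strip = depth[band[0]:band[1], :]            # rows near the scan plane
    strip = np.where(strip > 0, strip, np.inf)   # ignore invalid (zero) pixels
    z = strip.min(axis=0)                        # nearest obstacle per column
    cols = np.arange(depth.shape[1])
    angles = np.arctan2(cols - cx, fx)           # bearing of each column
    ranges = z / np.cos(angles)                  # slant range along the bearing
    return angles, ranges
```

In practice the resulting `(angles, ranges)` pairs are packaged in the same format as the lidar scan so both sensors feed the same fusion and mapping pipeline.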
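The Bayes-rule merge of the two local grid maps can be sketched in log-odds form, the standard way occupancy probabilities from independent sensors are combined. This is a generic sketch under an independent-sensor assumption, not the paper's exact formulation; the uniform prior of 0.5 is an assumed default.

```python
import numpy as np

def merge_grids(p_lidar, p_camera, prior=0.5):
    """Fuse two occupancy-probability grids with the Bayes rule.

    Each grid holds P(occupied) per cell in (0, 1). Treating the two
    sensors as independent, per-cell log-odds add, minus the prior's
    log-odds so it is not counted twice.
    """
    def logodds(p):
        return np.log(p / (1.0 - p))

    l = logodds(p_lidar) + logodds(p_camera) - logodds(prior)
    return 1.0 / (1.0 + np.exp(-l))   # convert log-odds back to probability
```

With the default prior, two agreeing 0.8 readings reinforce each other to about 0.94, while one 0.8 and one 0.2 cancel back to 0.5, which is the qualitative behavior the Bayes-rule merge provides.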
