Home > Conference Papers > IEEE Sensors > Sensor fused three-dimensional localization using IMU, camera and LiDAR

Sensor fused three-dimensional localization using IMU, camera and LiDAR



Abstract

Estimating the position and orientation (pose) of a moving platform in a three-dimensional (3D) environment is of significant importance in many areas, such as robotics and sensing. To perform this task, one can employ a single sensor or multiple sensors. Multi-sensor fusion has been used to improve the accuracy of the estimation and to compensate for individual sensor deficiencies. Unlike previous works in this area, which use sensors capable of 3D localization to estimate the full pose of a platform (such as an unmanned aerial vehicle, or drone), in this work we employ the data from a 2D light detection and ranging (LiDAR) sensor, which can only estimate the pose within a 2D plane. We fuse it in an extended Kalman filter with data from a camera and inertial sensors, showing that, despite the incomplete estimation from the 2D LiDAR, the overall estimated 3D pose can be improved. We also compare this scenario with the case where the 2D LiDAR is replaced by a 3D LiDAR with similar characteristics but with full 3D pose estimation capability.
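The fusion scheme the abstract describes — correcting a full 3D pose with a sensor that observes only a 2D plane — comes down to an extended Kalman filter update whose measurement matrix selects the observable states. The following minimal sketch (not the paper's implementation; all state ordering, values, and noise levels are illustrative assumptions) shows how a 2D LiDAR reading of (x, y, yaw) can still correct a six-dimensional pose state:

```python
import numpy as np

def ekf_update(x, P, z, R, H):
    """Standard EKF measurement update: fuse measurement z into prior (x, P)."""
    y = z - H @ x                       # innovation (measurement residual)
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ y                   # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
    return x_new, P_new

# Assumed state ordering: [x, y, z, roll, pitch, yaw].
# A 2D LiDAR observes only x, y, and yaw, so H has three rows
# picking out those three states; z, roll, and pitch are untouched.
H_lidar2d = np.zeros((3, 6))
H_lidar2d[0, 0] = H_lidar2d[1, 1] = H_lidar2d[2, 5] = 1.0

x0 = np.zeros(6)               # prior 3D pose (illustrative)
P0 = np.eye(6)                 # prior covariance
z = np.array([1.0, 2.0, 0.1])  # 2D LiDAR estimate: (x, y, yaw)
R = 0.01 * np.eye(3)           # assumed 2D LiDAR measurement noise

x1, P1 = ekf_update(x0, P0, z, R, H_lidar2d)
```

Only the observed components of the state move toward the measurement; the unobserved ones (altitude, roll, pitch) are left to the camera and inertial updates, which in this sketch would be further `ekf_update` calls with their own `H` and `R`.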
