IEEE Transactions on Industrial Electronics

Robust Vision-Based Relative-Localization Approach Using an RGB-Depth Camera and LiDAR Sensor Fusion

Abstract

This paper describes a robust vision-based relative-localization approach for a moving target that fuses an RGB-depth (RGB-D) camera with measurements from two-dimensional (2-D) light detection and ranging (LiDAR). In the proposed approach, the target's three-dimensional (3-D) and 2-D position information is measured with the RGB-D camera and the LiDAR sensor, respectively, and the target's location is found by combining visual-tracking algorithms, depth information from the structured-light sensor, and a low-level vision-LiDAR fusion algorithm, e.g., extrinsic calibration. To produce 2-D location measurements, both visual- and depth-tracking approaches are introduced: an adaptive color-based particle filter (ACPF) for visual tracking, and an interacting multiple-model (IMM) estimator with intermittent observations from depth-image segmentation for depth-image tracking. The 2-D LiDAR data enhance the location measurements by replacing the results from both visual and depth tracking; this procedure generates multiple LiDAR location measurements for a target. To handle these multiple location measurements, we propose a modified track-to-track fusion scheme. The proposed approach yields robust localization results even when one of the trackers fails. It was evaluated against position data from a Vicon motion-capture system as the ground truth, and the results of this evaluation demonstrate the superiority and robustness of the proposed approach.
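The low-level vision-LiDAR fusion step mentioned in the abstract rests on the extrinsic calibration between the two sensors. As a minimal sketch of that kind of step, not the paper's actual implementation, the Python snippet below lifts a planar 2-D scan into 3-D, applies an assumed extrinsic rotation R and translation t from the LiDAR frame to the camera frame, and projects through an assumed pinhole intrinsic matrix K; the function name and all symbols are hypothetical.

    import numpy as np

    def project_lidar_to_image(scan_xy, R, t, K):
        # Lift the planar scan to 3-D: the scan plane is z = 0 in the LiDAR frame.
        pts_lidar = np.hstack([scan_xy, np.zeros((len(scan_xy), 1))])
        # Rigid transform into the camera frame (R: 3x3, t: length-3 vector).
        pts_cam = pts_lidar @ R.T + t
        # Keep points in front of the camera, then apply the pinhole projection.
        pts_cam = pts_cam[pts_cam[:, 2] > 0]
        pix_h = pts_cam @ K.T
        return pix_h[:, :2] / pix_h[:, 2:3]   # pixel coordinates (u, v)

Once LiDAR returns are expressed in image coordinates this way, they can be associated with the visual- and depth-tracking outputs that the abstract describes.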
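The modified track-to-track fusion scheme itself is not detailed in the abstract. For orientation only, a standard covariance-weighted fusion of two tracker outputs for the same target, neglecting cross-covariance, looks roughly like this; the 2-D position state and function name are assumptions for illustration.

    import numpy as np

    def fuse_tracks(x1, P1, x2, P2):
        # Covariance-weighted gain: trusts whichever estimate is more certain.
        G = P1 @ np.linalg.inv(P1 + P2)
        x_fused = x1 + G @ (x2 - x1)
        P_fused = P1 - G @ P1   # fused covariance, cross-covariance neglected
        return x_fused, P_fused

Under this weighting, a failed tracker that reports an inflated covariance contributes almost nothing to the fused position, which matches the robustness behavior claimed above.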