Sensors (Basel, Switzerland)

Large-Scale Place Recognition Based on Camera-LiDAR Fused Descriptor



Abstract

In the field of autonomous driving, vehicles are equipped with a variety of sensors, including cameras and LiDARs. However, cameras suffer from illumination changes and occlusion, while LiDAR suffers from motion distortion, degenerate environments, and limited ranging distance. Fusing the information from these two sensors is therefore worth exploring. In this paper, we propose a fusion network that robustly captures both image and point cloud descriptors to solve the place recognition problem. Our contributions can be summarized as: (1) applying a trimmed strategy in point cloud global feature aggregation to improve recognition performance, (2) building a compact fusion framework that captures robust representations of both the image and the 3D point cloud, and (3) learning a proper metric to describe the similarity of our fused global feature. Experiments on the KITTI and KAIST datasets show that the proposed fused descriptor is more robust and discriminative than single-sensor descriptors.
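The pipeline described in the abstract can be illustrated with a minimal sketch: per-point features are aggregated into a global point cloud descriptor with a trimmed (outlier-robust) reduction, the result is concatenated with an image descriptor, and similarity is computed between the fused vectors. Note this is a hypothetical illustration, not the paper's network: the actual method learns the features, the aggregation, and the similarity metric end to end, whereas here a per-dimension trimmed mean and cosine similarity stand in for the learned components.

```python
import numpy as np

def trimmed_aggregate(point_feats, trim_frac=0.1):
    """Aggregate per-point local features (N x D) into one global
    D-dim vector, discarding the largest and smallest activations
    per dimension before averaging. A simple stand-in for the
    paper's trimmed aggregation strategy."""
    n = point_feats.shape[0]
    k = int(n * trim_frac)
    sorted_feats = np.sort(point_feats, axis=0)  # sort each dimension independently
    if k > 0:
        sorted_feats = sorted_feats[k:n - k]     # trim both tails
    return sorted_feats.mean(axis=0)

def fuse_descriptors(img_desc, pc_desc):
    """Concatenate the image and point cloud descriptors into one
    global descriptor and L2-normalize it."""
    fused = np.concatenate([img_desc, pc_desc])
    return fused / (np.linalg.norm(fused) + 1e-12)

def similarity(d1, d2):
    """Cosine similarity between two fused descriptors; the paper
    learns a metric instead, so this is only a placeholder."""
    return float(np.dot(d1, d2))
```

With unit-normalized fused descriptors, place recognition reduces to a nearest-neighbor search over the database: the query's fused descriptor is compared against stored descriptors and the highest-similarity match is returned.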

