
Raw fusion of camera and sparse LiDAR for detecting distant objects



Abstract

Environment perception plays a significant role in autonomous driving since all traffic participants in the vehicle’s surroundings must be reliably recognized and localized in order to take any subsequent action. The main goal of this paper is to present a neural network approach for fusing camera images and LiDAR point clouds in order to detect traffic participants in the vehicle’s surroundings more reliably. Our approach primarily addresses the problem of sparse LiDAR data (point clouds of distant objects), where due to sparsity the point-cloud-based detection might become ambiguous. In the proposed model each 3D point in the LiDAR point cloud is augmented by semantically strong image features, allowing us to inject additional information for the network to learn from. Experimental results show that our method increases the number of correctly detected 3D bounding boxes in sparse point clouds by at least 13–21%, and thus raw sensor fusion is validated as a viable approach for enhancing autonomous driving safety in difficult sensory conditions.
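The core idea of augmenting each LiDAR point with image features can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a pinhole projection matrix and a nearest-neighbour feature lookup, and the function name and arguments are chosen for clarity.

```python
import numpy as np

def augment_points_with_image_features(points, feat_map, P):
    """Append per-pixel image features to each LiDAR point.

    points   : (N, 3) LiDAR points, already in the camera coordinate frame.
    feat_map : (H, W, C) image feature map (e.g. from a CNN backbone).
    P        : (3, 4) camera projection matrix.
    Returns an (N, 3 + C) array; points behind the camera or projecting
    outside the image receive zero features.
    """
    N = points.shape[0]
    H, W, C = feat_map.shape

    # Project points into homogeneous pixel coordinates.
    pts_h = np.hstack([points, np.ones((N, 1))])  # (N, 4)
    proj = pts_h @ P.T                            # (N, 3)
    z = proj[:, 2]
    valid = z > 1e-6                              # in front of the camera
    uv = np.zeros((N, 2))
    uv[valid] = proj[valid, :2] / z[valid, None]

    # Nearest-neighbour lookup for points that land inside the image.
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    inside = valid & (u >= 0) & (u < W) & (v >= 0) & (v < H)

    feats = np.zeros((N, C))
    feats[inside] = feat_map[v[inside], u[inside]]
    return np.hstack([points, feats])
```

The augmented (N, 3 + C) array can then be fed to any point-cloud detection network in place of the raw XYZ input; a real pipeline would typically use bilinear sampling and the full intrinsic/extrinsic calibration.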


