Online Camera LiDAR Fusion and Object Detection on Hybrid Data for Autonomous Driving
IEEE Intelligent Vehicles Symposium

Abstract

Environment perception for autonomous driving traditionally uses sensor fusion to combine the object detections from the various sensors mounted on the car into a single representation of the environment. Non-calibrated sensors introduce artifacts and aberrations into the environment model, which makes tasks like free-space detection more challenging. In this study, we improve on the LiDAR-camera fusion approach of Levinson and Thrun. We rely on intensity discontinuities and on erosion and dilation of the edge image for increased robustness against shadows and visual patterns, a recurring problem in point-cloud-related work. Furthermore, we use a gradient-free optimizer instead of an exhaustive grid search to find the extrinsic calibration. Hence, our fusion pipeline is lightweight and able to run in real time on a computer in the car. For the detection task, we modify the Faster R-CNN architecture to accommodate hybrid LiDAR-camera data for improved object detection and classification. We test our algorithms on the KITTI data set and on locally collected urban scenarios. We also give an outlook on how radar can be added to the fusion pipeline via velocity matching.
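
The calibration step the abstract describes can be pictured with a short sketch. Below is a minimal, illustrative Python implementation of the two ideas it names: an edge image that is dilated and eroded (a morphological closing) so the alignment score tolerates shadows and fine visual texture, and a gradient-free optimizer replacing the exhaustive grid search over the six extrinsic parameters. Nelder-Mead is used here as one possible gradient-free choice; the paper does not name its exact method. All function and variable names are our own, and the cost follows the Levinson-Thrun idea of scoring projected LiDAR discontinuity points against camera edges; the authors' actual formulation may differ.

```python
import numpy as np
import cv2
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def edge_score_image(gray):
    """Camera edge image, closed (dilated then eroded) to suppress thin
    shadow/texture edges, then blurred so the alignment score changes
    smoothly as projected points move by a few pixels."""
    edges = cv2.Canny(gray, 50, 150).astype(np.float32) / 255.0
    kernel = np.ones((3, 3), np.uint8)
    edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
    return cv2.GaussianBlur(edges, (9, 9), 0)

def alignment_cost(x, pts, weights, edge_img, K):
    """Negative edge strength accumulated at the projections of LiDAR
    discontinuity points. x = (rotation vector [3], translation [3]);
    pts are LiDAR points pre-filtered to depth/intensity
    discontinuities, weights their discontinuity magnitudes."""
    R = Rotation.from_rotvec(x[:3]).as_matrix()
    cam = pts @ R.T + x[3:]                  # LiDAR -> camera frame
    front = cam[:, 2] > 0.5                  # keep points ahead of camera
    cam, w = cam[front], weights[front]
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]              # pinhole projection
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    h, wid = edge_img.shape
    ok = (u >= 0) & (u < wid) & (v >= 0) & (v < h)
    return -float(np.sum(w[ok] * edge_img[v[ok], u[ok]]))

def refine_extrinsics(x0, pts, weights, edge_img, K):
    """Gradient-free 6-DoF refinement in place of a grid search."""
    res = minimize(alignment_cost, x0, args=(pts, weights, edge_img, K),
                   method="Nelder-Mead",
                   options={"xatol": 1e-4, "fatol": 1e-4})
    return res.x
```

In an online setting, a refinement like this would be re-run continuously from the current estimate rather than searching the full parameter grid, which is consistent with the abstract's claim that the pipeline stays light enough for real-time use on an in-car computer.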
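For the detection side, the abstract states only that Faster R-CNN is modified to accommodate hybrid LiDAR-camera data. One common way to do this, sketched below as an assumption rather than the paper's confirmed method, is early fusion: render the calibrated LiDAR points into the image plane as a depth channel and widen the network's first convolution from three to four input channels (torchvision names are used for concreteness).

```python
import torch
import torch.nn as nn
import torchvision

def hybrid_faster_rcnn(num_classes=4):
    """Faster R-CNN over 4-channel RGB-D frames, where channel 4 is a
    LiDAR depth map projected with the calibrated extrinsics."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
        weights=None, num_classes=num_classes)
    # Replace the 3-channel ResNet stem with a 4-channel one.
    old = model.backbone.body.conv1
    model.backbone.body.conv1 = nn.Conv2d(
        4, old.out_channels, kernel_size=7, stride=2, padding=3, bias=False)
    # The built-in transform normalizes per channel; extend it to four
    # channels (the depth statistics below are placeholders).
    model.transform.image_mean = [0.485, 0.456, 0.406, 0.0]
    model.transform.image_std = [0.229, 0.224, 0.225, 1.0]
    return model

# Smoke test on a KITTI-sized RGB-D frame.
model = hybrid_faster_rcnn().eval()
frame = torch.rand(4, 375, 1242)
with torch.no_grad():
    detections = model([frame])[0]   # dict with 'boxes', 'labels', 'scores'
```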
