IEEE International Conference on Real-time Computing and Robotics

Three-Dimensional Real-Time Object Perception Based on a 16-Beam LiDAR for an Autonomous Driving Car



Abstract

Object perception is essential for autonomous driving applications in urban environments. A 64-beam LiDAR is a widely used solution in this field, but its high price has prevented broader adoption of autonomous driving technology. An alternative is to adopt a 16-beam LiDAR or multiple 16-beam LiDARs. However, a 16-beam LiDAR produces relatively sparse data, which makes object perception more challenging. In this paper, a new perception method is proposed to tackle the problems caused by the sparse data obtained from a 16-beam LiDAR. First, a segmentation method based on a 2D grid image is proposed, in which a free-space constraint is employed to reduce unreasonable image dilation and some segments are merged based on prior knowledge. Then, selective bounding-box features are employed in the association process to obtain more accurate results from the sparse data. The proposed method is evaluated on an autonomous driving car in real urban scenarios. The results show that the segmentation error can be as low as 7.7% with the free-space constraint and prior knowledge, and the absolute tracking error and overall classification accuracy are 0.44 m/s and 93.33%, respectively.
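The grid-based segmentation step outlined in the abstract can be illustrated with a short sketch: project the point cloud onto a 2D occupancy grid, dilate occupied cells to group nearby returns, and suppress dilation inside cells known to be free so that distinct objects are not merged across empty space. This is a minimal sketch, not the paper's implementation; the grid size, resolution, the ray-casting free-space estimate, and all function names are assumptions made only for illustration.

```python
# Minimal sketch of 2D grid segmentation with a free-space constraint.
# Assumptions (not from the paper): 400x400 grid, 0.2 m cells, free space
# estimated by simple 2D ray casting from the sensor to each occupied cell.
import numpy as np
from scipy import ndimage

GRID_SIZE = 400          # assumed grid dimensions (cells)
RESOLUTION = 0.2         # assumed cell size in meters (80 m x 80 m area)

def to_grid(points_xy):
    """Project LiDAR points (x, y in meters, sensor frame) onto an occupancy grid."""
    idx = np.floor(points_xy / RESOLUTION).astype(int) + GRID_SIZE // 2
    valid = np.all((idx >= 0) & (idx < GRID_SIZE), axis=1)
    occ = np.zeros((GRID_SIZE, GRID_SIZE), dtype=bool)
    occ[idx[valid, 0], idx[valid, 1]] = True
    return occ, idx[valid]

def free_space_mask(occupied_idx):
    """Mark cells crossed by the ray from the sensor to each occupied cell as free."""
    free = np.zeros((GRID_SIZE, GRID_SIZE), dtype=bool)
    origin = np.array([GRID_SIZE // 2, GRID_SIZE // 2])
    for cell in occupied_idx:
        n = int(np.max(np.abs(cell - origin)))
        for t in np.linspace(0.0, 1.0, n, endpoint=False)[1:]:
            r, c = (origin + t * (cell - origin)).astype(int)
            free[r, c] = True
    return free

def segment(points_xy):
    """Return a label image and the number of segments found."""
    occ, occ_idx = to_grid(points_xy)
    free = free_space_mask(occ_idx) & ~occ
    # Dilation connects nearby cells belonging to the same object, but is
    # suppressed in free space so separate objects are not merged together.
    dilated = ndimage.binary_dilation(occ, iterations=2) & ~free
    dilated |= occ
    labels, num = ndimage.label(dilated)
    labels[~occ] = 0      # keep labels only on originally occupied cells
    return labels, num
```

In this sketch the free-space mask plays the role the abstract attributes to the free-space constraint: it limits where dilation may grow, which is one plausible way to reduce the "unreasonable image dilation" mentioned in the paper.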
