
Object Detection and Classification by Decision-Level Fusion for Intelligent Vehicle Systems


Abstract

To understand driving environments effectively, accurate detection and classification of objects by sensor-based intelligent vehicle systems are essential. Object detection localizes objects, whereas object classification recognizes object classes from the detected object regions. For accurate object detection and classification, fusing information from multiple sensors is a key component of the representation and perception processes. In this paper, we propose a new object-detection and classification method using decision-level fusion. We fuse the classification outputs of independent unary classifiers applied to 3D point clouds and image data, each based on a convolutional neural network (CNN). The unary classifier for each of the two sensors is a five-layer CNN that uses more than two pre-trained convolutional layers to capture local-to-global features as the data representation. To represent data using the convolutional layers, we apply region-of-interest (RoI) pooling to the output of each layer over the object candidate regions, which are generated by an object proposal method based on color flattening and semantic grouping for the charge-coupled device (CCD) and Light Detection And Ranging (LiDAR) sensors. We evaluate the proposed method on the KITTI benchmark dataset to detect and classify three object classes: cars, pedestrians, and cyclists. The evaluation results show that the proposed method achieves better performance than previous methods. Our proposed method extracted approximately 500 proposals on a 1226 × 370 image, whereas the original selective search method extracted approximately 10^6 proposals. We obtained a mean average precision of 77.72% over all classes at the moderate detection level of the KITTI benchmark dataset.
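The decision-level fusion step combines the per-class outputs of the camera-based and LiDAR-based unary classifiers for each candidate region. The following is a minimal sketch in Python, assuming each unary classifier emits a probability distribution over {car, pedestrian, cyclist, background} and using an illustrative weighted-sum rule; the paper's actual fusion scheme may differ.

```python
import numpy as np

# A minimal sketch of decision-level fusion for one detected region.
# Assumption: each unary classifier (camera CNN, LiDAR CNN) outputs class
# probabilities over {car, pedestrian, cyclist, background}. The weighted-sum
# rule and weights below are illustrative, not the paper's exact fusion rule.

CLASSES = ["car", "pedestrian", "cyclist", "background"]

def fuse_decisions(p_camera, p_lidar, w_camera=0.5, w_lidar=0.5):
    """Fuse per-class scores from two unary classifiers by a weighted sum."""
    p_camera = np.asarray(p_camera, dtype=float)
    p_lidar = np.asarray(p_lidar, dtype=float)
    fused = w_camera * p_camera + w_lidar * p_lidar
    fused /= fused.sum()                      # renormalize to a distribution
    return CLASSES[int(np.argmax(fused))], fused

# Example: the camera is confident about "pedestrian", LiDAR is ambiguous.
label, scores = fuse_decisions([0.1, 0.7, 0.1, 0.1], [0.3, 0.4, 0.2, 0.1])
print(label, scores)
```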
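RoI pooling maps each object candidate region to a fixed-size feature regardless of the region's size, so that the outputs of different convolutional layers can be used as a common data representation. Below is a minimal NumPy sketch of max RoI pooling, assuming boxes are given as (x1, y1, x2, y2) in feature-map coordinates; the shapes and box format are assumptions, not the paper's implementation.

```python
import numpy as np

# A minimal sketch of region-of-interest (RoI) max pooling over a
# convolutional feature map. Box format, feature-map shape, and the 7x7
# output size are assumptions for illustration only.

def roi_pool(feature_map, box, output_size=(7, 7)):
    """Max-pool the features inside `box` into a fixed output_size grid."""
    c, h, w = feature_map.shape
    x1, y1, x2, y2 = box
    out_h, out_w = output_size
    pooled = np.zeros((c, out_h, out_w), dtype=feature_map.dtype)
    ys = np.linspace(y1, y2, out_h + 1)       # bin edges along height
    xs = np.linspace(x1, x2, out_w + 1)       # bin edges along width
    for i in range(out_h):
        for j in range(out_w):
            y_lo = int(np.floor(ys[i]))
            y_hi = max(int(np.ceil(ys[i + 1])), y_lo + 1)
            x_lo = int(np.floor(xs[j]))
            x_hi = max(int(np.ceil(xs[j + 1])), x_lo + 1)
            pooled[:, i, j] = feature_map[:, y_lo:y_hi, x_lo:x_hi].max(axis=(1, 2))
    return pooled

# Example: pool a 256-channel feature map over one candidate region
# (feature-map size chosen as a rough stand-in for a downsampled 1226x370 image).
fmap = np.random.rand(256, 46, 153).astype(np.float32)
print(roi_pool(fmap, box=(10, 5, 60, 30)).shape)   # -> (256, 7, 7)
```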
