IEEE Transactions on Computational Imaging

Saliency Map Generation by the Convolutional Neural Network for Real-Time Traffic Light Detection Using Template Matching



Abstract

A critical issue in autonomous vehicle navigation and advanced driver assistance systems (ADAS) is the accurate real-time detection of traffic lights. Typically, vision-based sensors are used to detect traffic lights. However, detecting traffic lights using computer vision, image processing, and learning algorithms is not trivial. The challenges include appearance variations, illumination variations, and reduced appearance information in low-illumination conditions. To address these challenges, we present a visual camera-based real-time traffic light detection algorithm that identifies the spatially constrained region-of-interest in the image containing the traffic light. Given the identified region-of-interest, we achieve high traffic light detection accuracy with few false positives, even in adverse environments. To perform robust traffic light detection in varying conditions, the proposed algorithm consists of two steps: offline saliency map generation and real-time traffic light detection. In the offline step, a convolutional neural network, i.e., a deep learning framework, detects and recognizes the traffic lights in the image using region-of-interest information provided by an onboard GPS sensor. The detected traffic light information is then used to generate saliency maps with a modified multidimensional density-based spatial clustering of applications with noise (M-DBSCAN) algorithm. The generated saliency maps are indexed using the vehicle GPS information. In the real-time step, traffic lights are detected by retrieving the relevant saliency maps and performing template matching using colour information. The proposed algorithm is validated with datasets acquired in varying conditions and in different countries, e.g., the USA, Japan, and France. The experimental results show high detection accuracy with negligible false positives under varied illumination conditions.
More importantly, an average computation time of 10 ms/frame is achieved. A detailed parameter analysis is conducted, and the observations are summarized and reported in this paper.
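The offline step groups per-frame CNN detections of the same physical traffic light before a saliency map is built from them. The abstract does not specify the paper's M-DBSCAN modification, so the following is only a rough sketch of plain DBSCAN over 2D detection centres, with hypothetical `eps` and `min_pts` values; the actual method presumably clusters over more dimensions (position, scale, GPS index):

```python
from math import hypot

def dbscan(points, eps, min_pts):
    """Plain DBSCAN over 2D points (illustrative stand-in for M-DBSCAN).
    Returns one cluster label per point; -1 marks noise."""
    labels = [None] * len(points)
    cluster = -1

    def neighbours(i):
        # All points within eps of point i (including i itself).
        return [j for j, q in enumerate(points)
                if hypot(points[i][0] - q[0], points[i][1] - q[1]) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbours(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # provisionally noise
            continue
        cluster += 1                # i is a core point: start a new cluster
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise reachable from a core point: border
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbours(j)
            if len(j_nbrs) >= min_pts:
                seeds.extend(j_nbrs)  # j is also core: keep expanding
    return labels

# Two tight groups of detections plus one spurious detection.
detections = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (50, 50)]
labels = dbscan(detections, eps=2.0, min_pts=2)
```

Here the two groups receive distinct cluster labels and the isolated point is labelled -1; each surviving cluster would then contribute one entry to the GPS-indexed saliency map.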
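The real-time step reduces to template matching inside the saliency-constrained region. As a minimal sketch (not the paper's implementation), a brute-force sum-of-squared-differences match over a single colour-score channel shows the idea; a production system would typically use an optimized routine such as OpenCV's `matchTemplate` on the colour-masked region instead:

```python
def match_template(image, template):
    """Brute-force template matching by sum of squared differences (SSD).
    image/template: 2D lists of scalars, e.g. a per-pixel red-colour score
    computed inside the saliency-map region. Returns the (row, col) of the
    best-matching top-left corner."""
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    best, best_pos = float("inf"), (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            # SSD between the template and the image patch at (r, c).
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(h) for j in range(w))
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# Toy 5x5 colour-score map with a bright 2x2 "red lamp" at row 2, col 3.
image = [[0] * 5 for _ in range(5)]
for i in (2, 3):
    for j in (3, 4):
        image[i][j] = 9
template = [[9, 9], [9, 9]]
print(match_template(image, template))  # → (2, 3)
```

Restricting the search to the saliency-map region is what keeps this exhaustive scan cheap enough for the reported 10 ms/frame budget.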


