Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering

Low-observable targets detection for autonomous vehicles based on dual-modal sensor fusion with deep learning approach


Abstract

Environment perception is a basic and necessary technology for autonomous vehicles to ensure safe and reliable driving. Many studies have focused on ideal environments, while much less work has addressed the perception of low-observable targets, whose features may not be obvious in a complex environment. However, autonomous vehicles inevitably drive in conditions such as rain, snow and night-time, in which target features are not obvious and detection models trained on images with distinct features fail to detect low-observable targets. This article studies efficient and intelligent recognition algorithms for low-observable targets in complex environments, focuses on developing an engineering method for dual-modal image (color-infrared) low-observable target recognition, and explores the application of infrared and color imaging to an intelligent perception system for autonomous vehicles. A dual-modal deep neural network is established to fuse the color and infrared images and detect low-observable targets in the dual-modal images. A manually labeled color-infrared image dataset of low-observable targets is built. The deep learning network is trained to optimize its internal parameters so that the system can recognize both pedestrians and vehicles in complex environments. The experimental results indicate that the dual-modal deep neural network performs better than traditional methods on low-observable target detection and recognition in complex environments.
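The abstract does not detail the network architecture, but the described approach (two image modalities fused inside one deep network and recognized as pedestrian or vehicle targets) can be illustrated with a minimal PyTorch sketch. The branch depths, the channel-wise concatenation fusion, the classification-style head and the class set below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a dual-modal (color + infrared) fusion network.
# The exact architecture is not given in the abstract; branch depths,
# fusion point and class set here are assumptions for illustration.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """3x3 conv -> batch norm -> ReLU -> 2x2 max pool."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

class DualModalNet(nn.Module):
    """Two CNN branches (RGB and infrared) whose feature maps are
    concatenated and passed to a shared head that scores
    pedestrian / vehicle / background."""
    def __init__(self, num_classes=3):
        super().__init__()
        # Color branch: 3-channel RGB input.
        self.rgb_branch = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        # Infrared branch: 1-channel thermal input.
        self.ir_branch = nn.Sequential(conv_block(1, 32), conv_block(32, 64))
        # Fusion and head: concatenate feature maps channel-wise, then classify.
        self.head = nn.Sequential(
            conv_block(128, 128),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(128, num_classes),
        )

    def forward(self, rgb, ir):
        fused = torch.cat([self.rgb_branch(rgb), self.ir_branch(ir)], dim=1)
        return self.head(fused)

# Example: one 256x256 color-infrared image pair.
model = DualModalNet()
logits = model(torch.randn(1, 3, 256, 256), torch.randn(1, 1, 256, 256))
print(logits.shape)  # torch.Size([1, 3])
```

Concatenating mid-level feature maps is only one plausible fusion point; the paper's actual model may fuse earlier (pixel level) or later (decision level) and use a full detection head rather than a classifier.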
