Conference on Automatic Target Recognition

Combining Visible and Infrared Spectrum Imagery using Machine Learning for Small Unmanned Aerial System Detection



Abstract

There is an increasing demand for technology and solutions to counter commercial, off-the-shelf small unmanned aerial systems (sUAS). Advances in machine learning and deep neural networks for object detection, coupled with the lower cost and power requirements of cameras, have led to promising vision-based solutions for sUAS detection. However, relying solely on the visible spectrum has previously led to reliability issues in low-contrast scenarios, such as sUAS flying below the treeline or against bright sources of light. Alternatively, due to the relatively high heat signatures emitted by sUAS during flight, a long-wave infrared (LWIR) sensor is able to produce images that clearly contrast the sUAS against its background. However, compared to widely available visible spectrum sensors, LWIR sensors have lower resolution and may produce more false positives when exposed to birds or other heat sources. This research proposes combining the advantages of LWIR and visible spectrum sensors using machine learning for vision-based detection of sUAS. Utilizing the heightened background contrast from the LWIR sensor, combined and synchronized with the relatively higher resolution of the visible spectrum sensor, a deep learning model was trained to detect sUAS in previously difficult environments. More specifically, the approach demonstrated effective detection of multiple sUAS flying above and below the treeline, in the presence of heat sources, and under glare from the sun. Our approach achieved a detection rate of 71.2 ± 8.3%, an improvement of 69% over LWIR alone and 30.4% over the visible spectrum alone, and a false alarm rate of 2.7 ± 2.6%, a decrease of 74.1% and 47.1% compared to LWIR alone and the visible spectrum alone, respectively, averaged over single- and multiple-drone scenarios and controlled for the same machine learning object detector confidence threshold of at least 50%.
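The detection-rate and false-alarm-rate figures above are reported at a detector confidence threshold of at least 50%. As a minimal sketch of how such metrics might be computed over thresholded detections (the per-detection structure and counts below are illustrative, not the paper's data):

```python
# Hedged sketch: filtering detections at a >=50% confidence threshold and
# computing detection rate / false alarm rate. The detection dicts and counts
# here are hypothetical examples, not the paper's actual evaluation pipeline.

def filter_by_confidence(detections, threshold=0.5):
    """Keep only detections whose confidence meets the threshold."""
    return [d for d in detections if d["confidence"] >= threshold]

def detection_rate(true_positives, total_targets):
    """Fraction of ground-truth sUAS instances that were detected."""
    return true_positives / total_targets if total_targets else 0.0

def false_alarm_rate(false_positives, total_detections):
    """Fraction of reported detections that were not real sUAS."""
    return false_positives / total_detections if total_detections else 0.0

# Illustrative made-up detections from one evaluation pass.
detections = [
    {"confidence": 0.92, "is_target": True},
    {"confidence": 0.48, "is_target": True},   # dropped by the threshold
    {"confidence": 0.81, "is_target": False},  # e.g. a bird or other heat source
    {"confidence": 0.77, "is_target": True},
]
kept = filter_by_confidence(detections)
tp = sum(d["is_target"] for d in kept)
fp = len(kept) - tp
print(detection_rate(tp, total_targets=3))       # 2 of 3 targets found
print(false_alarm_rate(fp, len(kept)))           # 1 of 3 kept detections is false
```

Raising the confidence threshold trades detection rate for a lower false alarm rate, which is why the paper fixes the threshold when comparing the fused, LWIR-only, and visible-only detectors.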
With a network of these small and affordable sensors, one can accurately estimate the 3D position of the sUAS, which could then be used for elimination or further localization by narrower sensors, such as a fire-control radar (FCR). Videos of the solution's performance can be seen at https://sites.google.com/view/tamudrone-spie2020/.
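The abstract describes synchronizing the low-resolution LWIR frames with the higher-resolution visible frames before detection, but does not specify the fusion method. One simple, hypothetical way to combine the two modalities is to upsample the LWIR frame to the visible resolution and append it as an extra input channel:

```python
# Hedged sketch: fusing a synchronized low-resolution LWIR frame with a
# higher-resolution visible (RGB) frame by nearest-neighbour upsampling and
# channel stacking. This is an assumed illustration, not the paper's method.

def upsample_nearest(img, out_h, out_w):
    """Nearest-neighbour upsampling of a 2-D grayscale image (list of rows)."""
    in_h, in_w = len(img), len(img[0])
    return [
        [img[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
        for r in range(out_h)
    ]

def fuse_rgb_lwir(rgb, lwir):
    """Stack an upsampled LWIR channel onto an RGB frame -> 4-channel pixels."""
    h, w = len(rgb), len(rgb[0])
    lwir_up = upsample_nearest(lwir, h, w)
    return [
        [rgb[r][c] + (lwir_up[r][c],) for c in range(w)]
        for r in range(h)
    ]

# Toy 4x4 visible frame (uniform RGB tuples) and a 2x2 LWIR frame.
rgb = [[(10, 20, 30)] * 4 for _ in range(4)]
lwir = [[100, 200],
        [50, 75]]
fused = fuse_rgb_lwir(rgb, lwir)
print(fused[0][0])  # (10, 20, 30, 100)
print(fused[0][3])  # (10, 20, 30, 200)
```

A detector trained on such 4-channel inputs can exploit the LWIR channel's strong target-background contrast while retaining the visible channels' finer spatial detail.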
