Advanced Engineering Informatics

Convolutional neural networks for object detection in aerial imagery for disaster response and recovery



Abstract

Accurate and timely access to data describing disaster impact and extent of damage is key to successful disaster management (a process that includes prevention, mitigation, preparedness, response, and recovery). Airborne data acquisition using helicopters and unmanned aerial vehicles (UAVs) helps obtain a bird's-eye view of disaster-affected areas. However, a major challenge to this approach is robustly processing large volumes of data to identify and map objects of interest on the ground in real time. The current process is resource-intensive (it must be carried out manually) and requires offline computing (post-processing of aerial videos). This research introduces and evaluates a series of convolutional neural network (CNN) models for ground object detection from aerial views of a disaster's aftermath. These models recognize critical ground assets, including building roofs (both damaged and undamaged), vehicles, vegetation, debris, and flooded areas. The CNN models are trained on an in-house aerial video dataset (named Volan2018) created using web mining techniques. Volan2018 contains eight annotated aerial videos (65,580 frames) collected by drone or helicopter at eight different locations during hurricanes that struck the United States in 2017-2018. Eight CNN models based on the You-Only-Look-Once (YOLO) algorithm are trained by transfer learning, i.e., pre-trained on the COCO or VOC dataset and re-trained on the Volan2018 dataset; they achieve 80.69% mAP on high-altitude (helicopter) footage and 74.48% mAP on low-altitude (drone) footage. This paper also presents a thorough investigation of the effect of camera altitude, data balance, and pre-trained weights on model performance, and finds that models trained and tested on videos taken from similar altitudes outperform those trained and tested on videos taken from different altitudes.
Moreover, the CNN model pre-trained on the VOC dataset and re-trained on balanced drone video yields the best result in significantly shorter training time.
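The mAP figures reported above are built from intersection-over-union (IoU) matching between predicted and ground-truth boxes. The sketch below is not the authors' evaluation code; it is a minimal, simplified (uninterpolated) average-precision computation for a single class, with an assumed box format of (x1, y1, x2, y2) and an illustrative IoU threshold of 0.5.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def average_precision(detections, ground_truth, iou_thresh=0.5):
    """detections: list of (confidence, box); ground_truth: list of boxes.
    Greedily matches detections to unmatched ground truths in order of
    descending confidence, then integrates precision over recall with a
    simple rectangle rule (a simplification of VOC/COCO-style AP)."""
    detections = sorted(detections, key=lambda d: -d[0])
    matched = [False] * len(ground_truth)
    tp, fp = [], []
    for conf, box in detections:
        best, best_iou = -1, iou_thresh
        for i, gt in enumerate(ground_truth):
            if not matched[i] and iou(box, gt) >= best_iou:
                best, best_iou = i, iou(box, gt)
        if best >= 0:
            matched[best] = True
            tp.append(1); fp.append(0)
        else:
            tp.append(0); fp.append(1)
    ap, cum_tp, cum_fp, prev_recall = 0.0, 0, 0, 0.0
    for t, f in zip(tp, fp):
        cum_tp += t
        cum_fp += f
        recall = cum_tp / len(ground_truth)
        precision = cum_tp / (cum_tp + cum_fp)
        ap += (recall - prev_recall) * precision
        prev_recall = recall
    return ap
```

In a full mAP computation, AP would be computed this way per class (roofs, vehicles, vegetation, debris, flooded areas) and then averaged; the paper's exact matching rules and thresholds may differ.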
