ISPRS Journal of Photogrammetry and Remote Sensing

Disaster damage detection through synergistic use of deep learning and 3D point cloud features derived from very high resolution oblique aerial images, and multiple-kernel-learning



Abstract

Oblique aerial images offer views of both building roofs and facades, and thus have been recognized as a potential source to detect severe building damage caused by destructive disaster events such as earthquakes. Therefore, they represent an important source of information for first responders or other stakeholders involved in the post-disaster response process. Several automated methods based on supervised learning have already been demonstrated for damage detection using oblique airborne images. However, they often do not generalize well when data from new, unseen sites need to be processed, hampering their practical use. Reasons for this limitation include image and scene characteristics, though the most prominent one relates to the image features used for training the classifier. Recently, features based on deep learning approaches, such as convolutional neural networks (CNNs), have been shown to be more effective than conventional hand-crafted features, and have become the state-of-the-art in many domains, including remote sensing. Moreover, oblique images are often captured with high block overlap, facilitating the generation of dense 3D point clouds, an ideal source from which to derive geometric characteristics. We hypothesized that the use of CNN features, either independently or in combination with 3D point cloud features, would yield improved performance in damage detection. To this end we used CNN and 3D features, both independently and in combination, using images from manned and unmanned aerial platforms over several geographic locations that vary significantly in terms of image and scene characteristics. A multiple-kernel-learning framework, an effective way of integrating features from different modalities, was used to combine the two sets of features for classification.
The results are encouraging: while CNN features produced an average classification accuracy of about 91%, the integration of 3D point cloud features led to an additional improvement of about 3% (i.e. an average classification accuracy of 94%). The significance of 3D point cloud features becomes more evident in the model transferability scenario (i.e., training and testing samples drawn from different sites that vary slightly in the aforementioned characteristics), where the integration of CNN and 3D point cloud features improved model transferability accuracy by up to 7% compared with the accuracy achieved by CNN features alone. Overall, an average accuracy of 85% was achieved for the model transferability scenario across all experiments. Our main conclusion is that such an approach qualifies for practical use. (C) 2017 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Published by Elsevier B.V. All rights reserved.
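The core idea of combining CNN and 3D point cloud features through multiple kernel learning can be illustrated with a minimal sketch. This is not the authors' implementation: the feature arrays, labels, and kernel weight below are hypothetical stand-ins, and the modality weight is a fixed constant rather than one learned by an MKL solver, which would optimize the weights jointly with the classifier.

```python
# Minimal sketch of combining two feature modalities via a weighted sum of
# kernels, classified with an SVM on the precomputed combined kernel.
# All data and the weight `w` are hypothetical; real multiple-kernel
# learning would learn `w` instead of fixing it.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-ins for per-sample features from the two modalities.
X_cnn = rng.normal(size=(60, 128))   # e.g. CNN activations per image patch
X_3d = rng.normal(size=(60, 10))     # e.g. geometric 3D point cloud features
y = rng.integers(0, 2, size=60)      # damaged (1) vs. intact (0) labels

# One base kernel per modality.
K_cnn = rbf_kernel(X_cnn, gamma=1.0 / X_cnn.shape[1])
K_3d = rbf_kernel(X_3d, gamma=1.0 / X_3d.shape[1])

# Convex combination of the base kernels (weight assumed, not learned here).
w = 0.7
K = w * K_cnn + (1.0 - w) * K_3d

# Train an SVM directly on the combined kernel matrix.
clf = SVC(kernel="precomputed").fit(K, y)
train_acc = clf.score(K, y)  # training accuracy on the combined kernel
```

The weighted sum of positive semi-definite kernels is itself a valid kernel, which is what lets the two modalities be fused at the kernel level rather than by naive feature concatenation.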


