Journal: Remote Sensing

Identification of Structurally Damaged Areas in Airborne Oblique Images Using a Visual-Bag-of-Words Approach



Abstract

Automatic post-disaster mapping of building damage using remote sensing images is an important and time-critical element of disaster management. The characteristics of remote sensing images available immediately after a disaster are uncertain, since they may vary in terms of capturing platform, sensor view, image scale, and scene complexity. A generalized damage-detection method that is robust to these image characteristics is therefore desirable. This study aims to develop a method for grid-level damage classification of remote sensing images by detecting damage corresponding to debris, rubble piles, and heavy spalling within a defined grid, regardless of the aforementioned image characteristics. The Visual-Bag-of-Words (BoW) is one of the most widely used and proven frameworks for image classification in the field of computer vision. The framework adopts a feature representation strategy that has been shown to be more effective for image classification, regardless of scale and clutter, than conventional global feature representations. In this study, supervised models using various radiometric descriptors (histogram of gradient orientations (HoG) and Gabor wavelets) and classifiers (SVM, Random Forests, and AdaBoost) were developed for damage classification based on both BoW and conventional global feature representations, and were tested on four datasets that vary according to the aforementioned image characteristics. The BoW framework outperformed the conventional global feature representation approaches in all scenarios (i.e., for all combinations of feature descriptors, classifiers, and datasets), producing an average accuracy of approximately 90%. Particularly encouraging was the 14-percentage-point accuracy improvement (from 77% to 91%) of BoW over the global representation on the most complex dataset, which was used to test the generalization capability.
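The core of the BoW pipeline described in the abstract is a two-step encoding: local descriptors (e.g., HoG or Gabor responses) extracted from a grid cell are quantized against a clustered "visual vocabulary", and the cell is represented by a normalized histogram of word assignments, which then feeds a classifier such as an SVM. The sketch below illustrates only that encoding step with a tiny hand-rolled k-means on synthetic descriptors; all names, dimensions, and the random data are hypothetical stand-ins, not the authors' actual configuration.

```python
import numpy as np

def build_vocabulary(descriptors, k=8, iters=10, seed=0):
    """Cluster local descriptors into k 'visual words' (tiny k-means sketch)."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), size=k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest cluster center
        dists = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # recompute center of non-empty clusters
                centers[j] = descriptors[labels == j].mean(axis=0)
    return centers

def bow_histogram(descriptors, centers):
    """Encode one grid cell as a normalized histogram over visual words."""
    dists = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
    labels = dists.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()

# toy demo: 20 grid cells, each with 50 local descriptors of dimension 16
rng = np.random.default_rng(1)
all_desc = rng.normal(size=(20 * 50, 16))
vocab = build_vocabulary(all_desc, k=8)
cell_vector = bow_histogram(all_desc[:50], vocab)  # fixed-length feature for a classifier
print(cell_vector.shape)
```

The resulting fixed-length vector is what makes BoW insensitive to image scale and clutter: however many descriptors a cell yields, the classifier always sees a k-dimensional histogram.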
