Remote Sensing

Dense Connectivity Based Two-Stream Deep Feature Fusion Framework for Aerial Scene Classification

Abstract

Aerial scene classification is an active and challenging problem in high-resolution remote sensing imagery understanding. Deep learning models, especially convolutional neural networks (CNNs), have achieved prominent performance in this field, and extracting deep features from the layers of a CNN model is widely used in CNN-based methods. Although CNN-based approaches have achieved great success, there is still plenty of room to further improve classification accuracy. In fact, fusing deep features with complementary features has great potential to yield better aerial scene classification performance. We therefore propose two effective architectures based on the idea of feature-level fusion. The first, a texture coded two-stream deep architecture, uses a raw RGB network stream and a mapped local binary patterns (LBP) coded network stream to extract two different sets of features and fuses them with a novel deep feature fusion model. The second, a saliency coded two-stream deep architecture, employs a saliency coded network stream as the second stream and fuses it with the raw RGB network stream using the same feature fusion model. For validation and comparison, the proposed architectures are evaluated through comprehensive experiments on three publicly available remote sensing scene datasets. With our feature fusion model, the saliency coded two-stream architecture achieves classification accuracies of 97.79% and 98.90% on the UC-Merced dataset (50% and 80% training samples), 94.09% and 95.99% on the Aerial Image Dataset (AID) (20% and 50% training samples), and 85.02% and 87.01% on the NWPU-RESISC45 dataset (10% and 20% training samples), outperforming state-of-the-art methods.
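The abstract only outlines the two-stream design, so the following is a minimal sketch (in PyTorch) of the feature-level fusion idea it describes: two CNN streams, one fed the raw RGB image and one fed an LBP- or saliency-coded version of the same scene, with their global deep features fused before classification. The DenseNet-121 backbones (suggested by the paper's dense-connectivity theme), concatenation-based fusion head, and layer sizes are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn
from torchvision import models

class TwoStreamFusionNet(nn.Module):
    """Illustrative two-stream deep feature fusion network."""
    def __init__(self, num_classes=45):
        super().__init__()
        # Two densely connected feature extractors, one per stream;
        # the exact backbone is an assumption here.
        self.rgb_stream = models.densenet121(weights=None).features
        self.coded_stream = models.densenet121(weights=None).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Hypothetical fusion head: concatenate the two 1024-d global
        # features and map them to class scores.
        self.fusion = nn.Sequential(
            nn.Linear(1024 * 2, 512),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(512, num_classes),
        )

    def forward(self, rgb, coded):
        f1 = self.pool(self.rgb_stream(rgb)).flatten(1)      # (B, 1024)
        f2 = self.pool(self.coded_stream(coded)).flatten(1)  # (B, 1024)
        return self.fusion(torch.cat([f1, f2], dim=1))

# Usage: `coded` stands in for the mapped-LBP or saliency map of the
# same scene, replicated to 3 channels so the RGB stem can be reused;
# num_classes=45 matches NWPU-RESISC45.
net = TwoStreamFusionNet(num_classes=45)
rgb = torch.randn(2, 3, 224, 224)
coded = torch.randn(2, 3, 224, 224)
logits = net(rgb, coded)  # shape (2, 45)

Concatenation is the simplest feature-level fusion operator; the paper proposes its own deep feature fusion model, which this sketch does not reproduce.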