
Classification of Very High-Resolution Remote Sensing Imagery Using a Fully Convolutional Network With Global and Local Context Information Enhancements



Abstract

Deep learning methods for semantic image segmentation can effectively extract geographical features from very high-resolution (VHR) remote sensing images. However, because geographical features occur at multiple scales, these methods suffer from over-segmentation of low-level features and from a loss of object integrity when fixed patch sizes are used. In this study, a dual attention mechanism is introduced and embedded into densely connected convolutional networks (DenseNets) to form a dense-global-entropy network (DGEN) for the semantic segmentation of VHR remote sensing images. In the DGEN architecture, a global attention enhancement module is developed for context acquisition, and a local attention fusion module is designed for detail selection. The network improves semantic segmentation performance on the ISPRS 2D test datasets. The experimental results show improvements in overall accuracy (OA), F1, the kappa coefficient, and mean intersection over union (MIoU). Compared with the DeeplabV3 and SegNet models, the OA improves by 2.79 and 1.19, the mean F1 by 3.43 and 0.88, the kappa coefficient by 4.04 and 1.82, and the MIoU by 5.22 and 1.47, respectively. The experiments show that the dual attention mechanism presented in this study can improve segmentation and maintain object integrity during the encoding-decoding process.
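The abstract does not include implementation details, but the two modules it names can be illustrated with a minimal PyTorch sketch. The class names, tensor shapes, and the specific channel-attention and spatial-gating formulations below are assumptions made for illustration only; they are not the authors' released code.

```python
# Hypothetical sketch of the two attention modules named in the abstract.
# Module names and internal structure are assumptions, not the DGEN implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalAttentionEnhancement(nn.Module):
    """Global context acquisition: squeeze spatial dims, re-weight channels."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        ctx = F.adaptive_avg_pool2d(x, 1).view(b, c)   # global context vector
        w = self.fc(ctx).view(b, c, 1, 1)              # per-channel weights
        return x * w                                   # emphasize informative channels

class LocalAttentionFusion(nn.Module):
    """Detail selection: gate low-level features before fusing with upsampled deep features."""
    def __init__(self, low_channels: int, high_channels: int, out_channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(low_channels + high_channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        self.project = nn.Conv2d(low_channels + high_channels, out_channels,
                                 kernel_size=3, padding=1)

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        high = F.interpolate(high, size=low.shape[-2:],
                             mode="bilinear", align_corners=False)
        fused = torch.cat([low, high], dim=1)
        mask = self.gate(fused)                        # per-pixel selection mask
        return self.project(fused * mask)              # keep only the useful detail
```

In this sketch the global module attaches after a DenseNet encoder stage and the local module replaces a plain skip connection in the decoder; how the actual DGEN wires them is not specified in the abstract.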
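The reported gains in OA, mean F1, kappa, and MIoU follow the standard definitions of these metrics. Below is a minimal sketch of how they are typically computed from a per-class confusion matrix; the function name is illustrative and this is not the authors' evaluation code.

```python
# Standard segmentation metrics (OA, mean F1, Cohen's kappa, MIoU)
# computed from a confusion matrix; illustrative only.
import numpy as np

def segmentation_metrics(conf: np.ndarray):
    """conf[i, j] = number of pixels of true class i predicted as class j."""
    total = conf.sum()
    tp = np.diag(conf).astype(float)
    pred = conf.sum(axis=0).astype(float)   # column sums: predictions per class
    true = conf.sum(axis=1).astype(float)   # row sums: ground truth per class

    oa = tp.sum() / total                                  # overall accuracy
    f1 = 2 * tp / np.maximum(pred + true, 1e-12)           # per-class F1
    iou = tp / np.maximum(pred + true - tp, 1e-12)         # per-class IoU
    pe = (pred * true).sum() / (total ** 2)                # chance agreement
    kappa = (oa - pe) / (1 - pe)                           # Cohen's kappa
    return oa, f1.mean(), kappa, iou.mean()
```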

Bibliographic Information

  • Source
    Quality Control, Transactions | 2020, Issue 2020 | pp. 14606-14619 | 14 pages
  • Author Affiliations

    Wuhan Univ Sch Resource & Environm Sci Wuhan 430079 Peoples R China;

    Hubei Inst Land Surveying & Mapping Wuhan 430010 Peoples R China;

    Wuhan Univ Sch Resource & Environm Sci Wuhan 430079 Peoples R China|Wuhan Univ RE Inst Smart Percept & Intelligent Comp Wuhan 430079 Peoples R China;

    Wuhan Univ Sch Resource & Environm Sci Wuhan 430079 Peoples R China|Anhui Univ Inst Phys Sci & Informat Technol Hefei 230601 Peoples R China;

    Wuhan Univ Sch Resource & Environm Sci Wuhan 430079 Peoples R China;

  • Indexing Information
  • Format: PDF
  • Language: English
  • CLC Classification
  • Keywords

    Attention mechanism; DenseNet; semantic segmentation; very high-resolution remote sensing images;


