IEEE Transactions on Geoscience and Remote Sensing

AFNet: Adaptive Fusion Network for Remote Sensing Image Semantic Segmentation


Abstract

Semantic segmentation of remote sensing images plays an important role in many applications. However, a remote sensing image typically comprises a complex and heterogeneous urban landscape with objects of various sizes and materials, which poses challenges to the task. In this work, a novel adaptive fusion network (AFNet) is proposed to improve the performance of very-high-resolution (VHR) remote sensing image segmentation. To coherently label size-varied ground objects from different categories, we design a multilevel architecture with a scale-feature attention module (SFAM). Through SFAM, at the locations of small objects, low-level features from the shallow layers of the convolutional neural network (CNN) are enhanced, whilst for large objects, high-level features from deep layers are enhanced. Thus, the features of size-varied objects are preserved when fusing features from different levels, which helps to label such objects. For labeling categories with high intra-class variation and varied scales, a multiscale structure with a scale-layer attention module (SLAM) is used to learn representative features, and an adjacent score map refinement module (ACSR) is employed as the classifier. Through SLAM, when fusing multiscale features, the feature map from the appropriate scale is given greater weight according to the scale of the object of interest. With such a scale-aware strategy, the learned features become more representative, which helps distinguish objects for semantic segmentation. In addition, performance is further improved by introducing several nonlinear layers into the ACSR. Extensive experiments on two well-known public high-resolution remote sensing image data sets show the effectiveness of the proposed model. Code and predictions are available at https://github.com/athauna/AFNet/
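The core idea shared by SFAM and SLAM, as the abstract describes it, is to fuse feature maps from multiple levels or scales with learned per-pixel weights, so that each pixel draws mainly on the level best suited to the size of the object it belongs to. The following is a minimal NumPy sketch of that fusion step only; the function and variable names are illustrative, not from the paper's code, and the gate logits would in practice be produced by a small learned attention branch rather than supplied directly:

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scale_attention_fuse(features, gate_logits):
    """Fuse per-scale feature maps with per-pixel attention weights.

    features:    (S, C, H, W) -- one C-channel feature map per scale/level
    gate_logits: (S, H, W)    -- per-pixel score for each scale

    A softmax over the scale axis turns the logits into weights that sum
    to 1 at every pixel, so pixels covered by small objects can favour
    shallow (fine) features while large-object pixels favour deep features.
    Returns the fused (C, H, W) map.
    """
    weights = softmax(gate_logits, axis=0)                   # (S, H, W)
    return (features * weights[:, None, :, :]).sum(axis=0)   # (C, H, W)

# Toy example: 2 scales, 3 channels, a 4x4 map.
rng = np.random.default_rng(0)
feats = rng.standard_normal((2, 3, 4, 4))
logits = rng.standard_normal((2, 4, 4))
fused = scale_attention_fuse(feats, logits)
print(fused.shape)  # (3, 4, 4)
```

This sketch assumes all feature maps have already been resampled to a common resolution; in a real multilevel CNN, upsampling (e.g. bilinear interpolation) of the deeper maps would precede the fusion.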

