Journal: Neural Computing & Applications

CSA-CDGAN: channel self-attention-based generative adversarial network for change detection of remote sensing images


Abstract

Remote sensing image change detection (RSICD) is the task of identifying significant differences between multi-temporal images acquired at different times. Most existing methods address this task with a Siamese network, focusing on how to compare two extracted image features to generate an initial difference map. However, Siamese-network-based methods have three drawbacks: (1) a complex architecture; (2) coarse change maps; and (3) a cumbersome two-stage detection procedure of feature extraction followed by feature comparison. To overcome these drawbacks, we design a general framework with a simple architecture, an integrated detection procedure, and a good capacity for detecting subtle changes. In this paper, we propose a channel self-attention network based on the generative adversarial network (GAN) for change detection of remote sensing images. The network uses an encoder–decoder to produce a change map directly from the two input images, and it detects small punctate and thin linear changes better than Siamese-based networks do. By regarding RSICD as an image translation problem, we use a GAN to detect changes. In addition, a channel self-attention module is proposed to further improve the network's performance. Experimental results on three public remote sensing RGB-image datasets (the change detection dataset, the Wuhan University building change detection dataset, and the LEVIR building change detection dataset) demonstrate that our method outperforms other state-of-the-art methods: in terms of F1 score, the proposed method achieves maximum improvements of 5.1%, 3.1%, and 1.7% on these datasets, respectively. Models and code will be available at https://github.com/wangle53/CSA-CDGAN.
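The abstract does not detail the CSA module's internals, but channel self-attention in the usual sense computes a C×C affinity map between feature channels and uses it to reweight them. Below is a minimal NumPy sketch under that assumption; the function name, residual connection, and absence of learned projections are illustrative choices, not the paper's exact design.

```python
import numpy as np

def channel_self_attention(x):
    """Sketch of channel self-attention over a feature map.

    x: array of shape (C, H, W). A C x C attention map is computed from
    channel-to-channel affinities, softmax-normalized per channel, and
    used to reweight the flattened features; a residual adds the input back.
    """
    c, h, w = x.shape
    feat = x.reshape(c, h * w)                            # (C, N) flattened channels
    energy = feat @ feat.T                                # (C, C) channel affinities
    energy = energy - energy.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(energy)
    attn = attn / attn.sum(axis=-1, keepdims=True)        # softmax over channels
    out = attn @ feat                                     # attention-weighted channels
    return out.reshape(c, h, w) + x                       # residual connection

# Usage: e.g. 8 channels of concatenated bi-temporal 16x16 features
feats = np.random.default_rng(0).normal(size=(8, 16, 16))
refined = channel_self_attention(feats)
```

In the paper's setting such a module would sit inside the encoder–decoder generator, refining the fused bi-temporal features before the change map is decoded.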
