IEEE Geoscience and Remote Sensing Letters

SiftingGAN: Generating and Sifting Labeled Samples to Improve the Remote Sensing Image Scene Classification Baseline In Vitro



Abstract

The lack of annotated samples greatly restrains the direct application of deep learning to remote sensing image scene classification. Although studies have tackled this issue through data augmentation with various image transformation operations, the resulting samples remain limited in quantity and diversity. Recently, the advent of unsupervised-learning-based generative adversarial networks (GANs) has offered a new way to generate augmented samples. However, such GAN-generated samples have so far served only to train the GAN model itself and to improve the performance of the discriminator inside the GAN (in vivo). It remains an open question whether GAN-generated samples can improve the scene classification performance of other deep learning networks (in vitro) more effectively than the widely used transformed samples. To answer this question, this letter proposes a SiftingGAN approach that generates more numerous, more diverse, and more authentic labeled samples for data augmentation. SiftingGAN extends the traditional GAN framework with an Online-Output method for sample generation, a GenerativeModel-Sifting method for model sifting, and a Labeled-Sample-Discriminating method for sample sifting. Experiments on a well-known aerial image data set demonstrate that the proposed SiftingGAN method not only effectively improves on the scene classification baseline achieved without data augmentation but also significantly outperforms comparison methods based on traditional geometric/radiometric transformation operations.
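The two-stage sifting idea described in the abstract can be sketched in a few lines. The sketch below is a toy illustration, not the paper's implementation: `discriminator_score` and `generate` are hypothetical stand-ins for a trained GAN's discriminator and generator checkpoints, and the thresholds (0.5 for model sifting, 0.6 for sample sifting) are arbitrary assumptions chosen for the demo. The point it shows is the pipeline shape: first keep only generator checkpoints whose outputs score well on average (GenerativeModel-Sifting), then keep only individual labeled samples the discriminator rates as realistic (Labeled-Sample-Discriminating).

```python
import math
import random

random.seed(0)

def discriminator_score(sample):
    # Stand-in for a trained discriminator's "realness" probability.
    # In this toy setup, real data is assumed to be centred at 1.0,
    # so samples nearer that centre score higher.
    return math.exp(-sum((x - 1.0) ** 2 for x in sample) / 8.0)

def generate(model_quality, n, dim=4):
    # Stand-in generator checkpoint: higher-quality checkpoints
    # produce samples closer to the real-data centre.
    return [[random.gauss(model_quality, 0.5) for _ in range(dim)]
            for _ in range(n)]

# Stage 1 -- GenerativeModel-Sifting: keep only checkpoints whose
# generated batches score well on average under the discriminator.
checkpoints = {"epoch_10": -0.5, "epoch_50": 0.6, "epoch_90": 1.0}
kept_models = {}
for name, quality in checkpoints.items():
    batch = generate(quality, 64)
    mean_score = sum(map(discriminator_score, batch)) / len(batch)
    if mean_score > 0.5:
        kept_models[name] = quality

# Stage 2 -- Labeled-Sample-Discriminating: from surviving checkpoints,
# keep only individual samples the discriminator rates as realistic.
augmented = []
for name, quality in kept_models.items():
    for sample in generate(quality, 128):
        if discriminator_score(sample) > 0.6:
            augmented.append((name, sample))

print(f"kept models: {sorted(kept_models)}, sifted samples: {len(augmented)}")
```

In the actual method, the surviving samples carry their class labels and are mixed into the training set of an external (in vitro) scene classifier; the sifting stages are what distinguish this from naively pooling all GAN outputs.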

