Published in: International Conference on Pattern Recognition (ICPR)

Data Augmentation via Mixed Class Interpolation using Cycle-Consistent Generative Adversarial Networks Applied to Cross-Domain Imagery



Abstract

Machine learning driven object detection and classification within non-visible imagery has an important role in many fields such as night vision, all-weather surveillance and aviation security. However, such applications often suffer due to the limited quantity and variety of non-visible spectral domain imagery, in contrast to the high data availability of visible-band imagery that readily enables contemporary deep learning driven detection and classification approaches. To address this problem, this paper proposes and evaluates a novel data augmentation approach that leverages the more readily available visible-band imagery via a generative domain transfer model. The model can synthesise large volumes of non-visible domain imagery by image-to-image (I2I) translation from the visible image domain. Furthermore, we show that the generation of interpolated mixed class (non-visible domain) image examples via our novel Conditional CycleGAN Mixup Augmentation (C2GMA) methodology can lead to a significant improvement in the quality of non-visible domain classification tasks that otherwise suffer due to limited data availability. Focusing on classification within the Synthetic Aperture Radar (SAR) domain, our approach is evaluated on a variation of the Statoil/C-CORE Iceberg Classifier Challenge dataset and achieves 75.4 % accuracy, demonstrating a significant improvement when compared against traditional data augmentation strategies (Rotation, Mixup, and MixCycleGAN).
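The interpolated mixed class examples mentioned above build on the standard Mixup augmentation, one of the baselines the paper compares against (Rotation, Mixup, MixCycleGAN). As a point of reference, a minimal sketch of plain Mixup follows; the C2GMA method itself (conditional CycleGAN translation into the non-visible domain combined with mixup) is not reproduced here, and the function and parameter names are illustrative only.

```python
# Minimal sketch of classic Mixup augmentation (a baseline in this paper,
# NOT the proposed C2GMA method). Names and shapes are illustrative.
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Blend two labelled examples; mixing weight lambda ~ Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2   # pixel-wise interpolation of inputs
    y = lam * y1 + (1.0 - lam) * y2   # matching interpolation of soft labels
    return x, y

# Toy example: mix a "ship" patch with an "iceberg" patch
# (75x75 arrays, loosely echoing the Statoil/C-CORE SAR chip size).
ship, iceberg = np.zeros((75, 75)), np.ones((75, 75))
x, y = mixup(ship, np.array([1.0, 0.0]), iceberg, np.array([0.0, 1.0]))
```

In C2GMA this interpolation is applied to class-conditioned images produced by the CycleGAN-based visible-to-SAR translation, rather than directly to visible-band pixels as in this sketch.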
