IEEE Sensor Array and Multichannel Signal Processing Workshop

Coupled Adversarial Learning for Single Image Super-Resolution



Abstract

Generative adversarial nets (GANs) have been widely used in image restoration tasks such as image denoising, enhancement, and super-resolution. The objective function of a GAN-based image super-resolution model typically combines a reconstruction error, a semantic feature distance, and a GAN loss. The semantic feature distance measures the similarity between features of the super-resolved and ground-truth images, encouraging them to share similar feature representations. However, these features are usually extracted by a pre-trained model whose representation was not designed to distinguish the features of low-resolution images from those of high-resolution images. In this study, a coupled adversarial net (CAN) based on a Siamese network structure is proposed to improve the effectiveness of feature extraction. The proposed CAN provides the GAN loss and the semantic feature distance simultaneously, reducing training complexity while improving performance. Extensive experiments show that the proposed CAN is effective and efficient compared with state-of-the-art image super-resolution schemes.
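The abstract gives no implementation details, so the following is only a minimal PyTorch-style sketch of the idea it describes: a single discriminator with a shared (Siamese-style) encoder that yields both an adversarial score for the GAN loss and a feature embedding used for the semantic feature distance, instead of relying on a separate pre-trained feature extractor. All names, layer sizes, and loss weights below are hypothetical and not taken from the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SiameseDiscriminator(nn.Module):
        """Shared encoder producing both a feature embedding and real/fake logits (illustrative)."""
        def __init__(self, channels=3, base=32):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(channels, base, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(base * 2, base * 4, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            )
            self.score = nn.Conv2d(base * 4, 1, 3, padding=1)  # patch-level real/fake logits

        def forward(self, x):
            feat = self.encoder(x)    # semantic features from the shared branch
            logit = self.score(feat)  # adversarial logits
            return feat, logit

    def generator_loss(disc, sr, hr, w_rec=1.0, w_feat=0.1, w_adv=1e-3):
        """Combined objective: reconstruction + semantic feature distance + GAN loss.
        The weights are placeholders, not values from the paper."""
        feat_sr, logit_sr = disc(sr)
        with torch.no_grad():
            feat_hr, _ = disc(hr)  # ground-truth features from the same (coupled) encoder
        l_rec = F.l1_loss(sr, hr)                                # reconstruction error
        l_feat = F.mse_loss(feat_sr, feat_hr)                    # semantic feature distance
        l_adv = F.binary_cross_entropy_with_logits(
            logit_sr, torch.ones_like(logit_sr))                 # non-saturating GAN loss
        return w_rec * l_rec + w_feat * l_feat + w_adv * l_adv

Because both loss terms come from one shared encoder, a single forward pass per image supplies the adversarial signal and the feature comparison, which is the training simplification the abstract attributes to the coupled design.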

