International Joint Conference on Artificial Intelligence

Learning a Generative Model for Fusing Infrared and Visible Images via Conditional Generative Adversarial Network with Dual Discriminators

Abstract

In this paper, we propose a new end-to-end model, called the dual-discriminator conditional generative adversarial network (DDcGAN), for fusing infrared and visible images of different resolutions. Unlike pixel-level methods and existing deep learning-based methods, the fusion task is accomplished through an adversarial process between a generator and two discriminators, in addition to a specially designed content loss. The generator is trained to produce realistic fused images that fool both discriminators. The two discriminators are trained to estimate, respectively, the JS divergence between the probability distributions of downsampled fused images and infrared images, and the JS divergence between the probability distributions of the gradients of fused images and the gradients of visible images. Thus, the fused images can retain features that the content loss alone does not constrain. Consequently, both the prominence of thermal targets in the infrared image and the texture details in the visible image can be preserved, or even enhanced, in the fused image. Moreover, by constraining and discriminating between the downsampled fused image and the low-resolution infrared image, DDcGAN is well suited to fusing images of different resolutions. Qualitative and quantitative experiments on publicly available datasets demonstrate the superiority of our method over the state of the art.
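To make the dual-discriminator setup concrete, the sketch below shows one possible training step in PyTorch. It is a minimal illustration under stated assumptions, not the authors' released implementation: it substitutes standard cross-entropy GAN losses for the paper's JS-divergence formulation, assumes single-channel inputs with the infrared image at one quarter of the visible resolution, and the names (generator, d_ir, d_vis, gradient, lam) and the loss weighting are invented for illustration only.

```python
import torch
import torch.nn.functional as F


def gradient(img):
    """Laplacian filter used as a simple stand-in for the gradient (texture) map of a
    single-channel image batch of shape (N, 1, H, W)."""
    kernel = torch.tensor([[0., 1., 0.],
                           [1., -4., 1.],
                           [0., 1., 0.]], device=img.device).view(1, 1, 3, 3)
    return F.conv2d(img, kernel, padding=1)


def train_step(generator, d_ir, d_vis, vis, ir, opt_g, opt_d, lam=0.5):
    """One adversarial update: d_ir judges the downsampled fused image against the
    infrared image, d_vis judges the gradient of the fused image against the gradient
    of the visible image; the generator tries to fool both while minimizing a content loss."""
    fused = generator(vis, ir)                        # fused image at the visible resolution
    fused_down = F.avg_pool2d(fused, kernel_size=4)   # assumed 4x gap to the infrared resolution

    # --- discriminator update: real sources vs. the (detached) fused image ---
    opt_d.zero_grad()
    real_ir, fake_ir = d_ir(ir), d_ir(fused_down.detach())
    real_vis, fake_vis = d_vis(gradient(vis)), d_vis(gradient(fused).detach())
    d_loss = (F.binary_cross_entropy_with_logits(real_ir, torch.ones_like(real_ir))
              + F.binary_cross_entropy_with_logits(fake_ir, torch.zeros_like(fake_ir))
              + F.binary_cross_entropy_with_logits(real_vis, torch.ones_like(real_vis))
              + F.binary_cross_entropy_with_logits(fake_vis, torch.zeros_like(fake_vis)))
    d_loss.backward()
    opt_d.step()

    # --- generator update: fool both discriminators, plus the content loss ---
    opt_g.zero_grad()
    adv_ir = d_ir(fused_down)
    adv_vis = d_vis(gradient(fused))
    adv_loss = (F.binary_cross_entropy_with_logits(adv_ir, torch.ones_like(adv_ir))
                + F.binary_cross_entropy_with_logits(adv_vis, torch.ones_like(adv_vis)))
    content_loss = F.mse_loss(fused_down, ir) + F.l1_loss(gradient(fused), gradient(vis))
    g_loss = adv_loss + lam * content_loss
    g_loss.backward()
    opt_g.step()
    return g_loss.item(), d_loss.item()
```

Constraining the downsampled fused image against the infrared input, rather than upsampling the infrared image, is what allows the two source images to differ in resolution in this sketch, mirroring the idea described in the abstract.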
