IEEE Transactions on Image Processing

Unsupervised Deep Image Fusion With Structure Tensor Representations

Abstract

Convolutional neural networks (CNNs) have facilitated substantial progress on various problems in computer vision and image processing. However, applying them to image fusion has remained challenging due to the lack of labelled data for supervised learning. This paper introduces the deep image fusion network (DIF-Net), an unsupervised deep learning framework for image fusion. DIF-Net parameterizes the entire image fusion process, comprising feature extraction, feature fusion, and image reconstruction, with a single CNN. Its goal is to generate an output image whose contrast matches that of the high-dimensional input images. To this end, we propose an unsupervised loss function based on the structure tensor representation of multi-channel image contrast. Unlike traditional fusion methods, which involve time-consuming optimization or iterative procedures to obtain the result, our loss function is minimized by a stochastic deep-learning solver over large-scale training examples. Consequently, the proposed method produces fused images that preserve source image details through a single forward pass of a network trained without reference ground-truth labels. The method applies broadly to image fusion problems, including multi-spectral, multi-focus, and multi-exposure image fusion. Quantitative and qualitative evaluations show that the proposed technique outperforms existing state-of-the-art approaches across these applications.
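
To make the loss idea concrete, below is a minimal PyTorch sketch of a structure-tensor contrast loss in the spirit the abstract describes: the per-pixel 2x2 structure tensor J = sum_c grad(I_c) grad(I_c)^T is computed for both the stacked multi-channel source images and the fused output, and the distance between the two tensor fields is penalized. The Sobel gradient filters, the L1 penalty, and the function names `structure_tensor` and `contrast_loss` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

# Sobel kernels for image gradients (an assumption; the paper may use
# different derivative filters).
_SOBEL_X = torch.tensor([[-1.0, 0.0, 1.0],
                         [-2.0, 0.0, 2.0],
                         [-1.0, 0.0, 1.0]]).view(1, 1, 3, 3) / 8.0
_SOBEL_Y = _SOBEL_X.transpose(2, 3)

def structure_tensor(img: torch.Tensor) -> torch.Tensor:
    """Per-pixel 2x2 structure tensor of a (B, C, H, W) image,
    summed over channels: J = sum_c grad(I_c) grad(I_c)^T."""
    b, c, h, w = img.shape
    flat = img.reshape(b * c, 1, h, w)
    ix = F.conv2d(flat, _SOBEL_X.to(img), padding=1).reshape(b, c, h, w)
    iy = F.conv2d(flat, _SOBEL_Y.to(img), padding=1).reshape(b, c, h, w)
    # Channel-summed tensor entries Jxx, Jxy, Jyy, stacked as a
    # flattened 2x2 matrix per pixel: (B, 4, H, W).
    jxx = (ix * ix).sum(dim=1)
    jxy = (ix * iy).sum(dim=1)
    jyy = (iy * iy).sum(dim=1)
    return torch.stack([jxx, jxy, jxy, jyy], dim=1)

def contrast_loss(fused: torch.Tensor, sources: list) -> torch.Tensor:
    """L1 distance between the structure tensor of the fused image and
    that of the channel-stacked source images."""
    stacked = torch.cat(sources, dim=1)
    return F.l1_loss(structure_tensor(fused), structure_tensor(stacked))

# Hypothetical training step: `dif_net` stands in for any CNN mapping
# the concatenated sources to a fused image.
#   fused = dif_net(torch.cat([src_a, src_b], dim=1))
#   loss = contrast_loss(fused, [src_a, src_b])
```

Because the target contrast comes from the inputs themselves, this loss needs no ground-truth fused image, which is the unsupervised property the abstract emphasizes; the full DIF-Net objective may include additional terms not shown here.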
