International Conference on Artificial Intelligence and Soft Computing

Dense Multi-focus Fusion Net: A Deep Unsupervised Convolutional Network for Multi-focus Image Fusion



Abstract

In this paper, we introduce a novel unsupervised deep learning (DL) method for multi-focus image fusion. Existing DL-based multi-focus image fusion (MFIF) methods treat MFIF as a classification problem and require a massive number of reference images to train their networks. Instead, we propose an end-to-end unsupervised DL model that fuses multi-focus color images without reference ground-truth images. In contrast to a conventional CNN, our proposed model consists only of convolutional layers, yet achieves promising performance. In our network, all layers of the feature extraction subnetworks are connected to each other in a feedforward fashion, with the aim of extracting more useful common low-level features from the multi-focus image pair. Instead of a conventional loss function, our model uses structural similarity (SSIM) to compute the loss during reconstruction. Because it is fully convolutional, the model can process variable-size images during testing and validation. Experimental results on various test images confirm that the proposed method achieves state-of-the-art performance on both subjective and objective evaluation metrics.
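The SSIM-based reconstruction loss mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes a simplified global (windowless) SSIM and a hypothetical objective that averages the fused image's similarity to both source images, which is one plausible way to train without ground truth.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Simplified global SSIM computed over the whole image at once.

    Uses the standard stabilizing constants C1, C2; the paper's actual
    (likely windowed) SSIM formulation may differ.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def fusion_loss(fused, src_a, src_b):
    # Hypothetical unsupervised objective: drive the fused image to be
    # structurally similar to both source images (no ground truth needed).
    return 1.0 - 0.5 * (ssim_global(fused, src_a) + ssim_global(fused, src_b))
```

A perfect reconstruction yields SSIM = 1 against each source and hence zero loss; during training, this scalar would be minimized by backpropagation through the fusion network.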
