Applied Soft Computing

A novel multi-focus image fusion by combining simplified very deep convolutional networks and patch-based sequential reconstruction strategy



Abstract

Multi-focus image fusion is an important approach to obtaining a composite image with all objects in focus, and it can be treated as an image segmentation problem that can be solved by convolutional neural networks (CNN). For CNN-based multi-focus image fusion methods, no public training dataset exists, and the network model determines the recognition accuracy of focused and defocused pixels. Considering these problems, we propose a novel CNN-based multi-focus image fusion method that combines simplified very deep convolutional networks with a patch-based sequential reconstruction strategy. First, defocused images with five blur levels were simulated with a Gaussian filter, and a novel training dataset was constructed for multi-focus image fusion. Second, the very deep convolutional networks model was simplified to design a Siamese CNN model, which was used to recognize focused and defocused pixels. Third, the focused and defocused regions were detected by the patch-based sequential reconstruction strategy, and the final decision map was refined with a morphological operator. Finally, multi-focus image fusion was performed. The Lytro dataset, a public multi-focus image dataset, was used to validate the proposed method. Information entropy, mutual information, the universal image quality index, visual information fidelity, and edge retention were adopted as evaluation metrics, and the proposed method was compared with state-of-the-art methods. Experimental results demonstrate that the proposed method achieves state-of-the-art fusion results in terms of both visual quality and objective assessment. (C) 2020 Elsevier B.V. All rights reserved.
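Three steps of the pipeline described above — simulating defocus with Gaussian blur, refining a binary focus decision map with a morphological operator, and the final pixel-wise fusion — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the five `sigmas` values, the 3x3 structuring element, and the opening-then-closing refinement are assumptions, since the abstract does not specify them.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, binary_opening, binary_closing

def simulate_defocus_levels(image, sigmas=(1, 2, 3, 4, 5)):
    """Blur a sharp image at several Gaussian levels to mimic defocus.
    The five sigma values are illustrative placeholders."""
    return [gaussian_filter(image, sigma=s) for s in sigmas]

def refine_decision_map(decision_map, structure_size=3):
    """Remove small misclassified regions in a binary focus decision map
    using morphological opening followed by closing (one possible choice
    of morphological operator)."""
    structure = np.ones((structure_size, structure_size), dtype=bool)
    opened = binary_opening(decision_map, structure=structure)
    return binary_closing(opened, structure=structure)

def fuse(img_a, img_b, decision_map):
    """Pixel-wise fusion: take img_a where the decision map marks it
    as focused, img_b elsewhere."""
    return np.where(decision_map, img_a, img_b)
```

In the paper the decision map itself comes from the Siamese CNN and the patch-based sequential reconstruction; the sketch only covers the surrounding data-simulation and post-processing stages.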
