Electronic Letters on Computer Vision and Image Analysis (ELCVIA)

Detail Enhanced Multi-Exposure Image Fusion Based On Edge Preserving Filters

Abstract

Recent computational photography techniques play a significant role in overcoming the limitation of standard digital cameras in handling the wide dynamic range of real-world scenes that contain both brightly and poorly illuminated areas. In many such techniques [1,2,3], it is often desirable to fuse details from images captured at different exposure settings while avoiding visual artifacts. One such technique is High Dynamic Range (HDR) imaging, which provides a way to recover radiance maps from photographs taken with conventional imaging equipment. The process of HDR image composition requires knowledge of the exposure times and the Camera Response Function (CRF), which is needed to linearize the image data before the Low Dynamic Range (LDR) exposures are combined into an HDR image. One of the long-standing challenges in HDR imaging is the limited Dynamic Range (DR) of conventional display devices and printing technology, which are therefore unable to reproduce the full DR. Although the DR can be reduced by tone-mapping, this comes with an unavoidable increase in computational cost. It is therefore desirable to maximize the information content of the synthesized scene from a set of multi-exposure images without computing an HDR radiance map or applying tone-mapping.

This research develops a novel detail-enhanced multi-exposure image fusion approach based on texture features, which exploits the edge-preserving and intra-region smoothing properties of nonlinear diffusion filters based on Partial Differential Equations (PDEs). Given the captured multi-exposure image series, we first decompose the images into Base Layers (BLs) and Detail Layers (DLs) to extract sharp edges and fine details, respectively. The magnitude of the image intensity gradient is used to encourage smoothing in homogeneous regions in preference to inhomogeneous ones. In the next step, texture features of the BL are used to generate a decision mask (i.e., local range) that guides the fusion of BLs in a multi-resolution fashion. Finally, a well-exposed fused image is obtained by combining the fused BL and the DL at each scale across all the input exposures. The combination of edge-preserving filters with the Laplacian pyramid is shown to enhance texture detail in the fused image. Furthermore, a non-linear adaptive filter, which has a better response near strong edges, is employed for BL and DL decomposition. The texture details are then added to the fused BL to reconstruct a detail-enhanced LDR version of the image. This increases the robustness of the texture details while avoiding the gradient reversal artifacts near strong edges that may appear in the fused image after DL enhancement. Finally, we propose a novel exposure fusion technique in which a Weighted Least Squares (WLS) optimization framework is used to refine the weight maps of the BLs and DLs, leading to a new, simple weighted-average fusion framework. Computationally simple texture features (i.e., the DL) and a color saturation measure are preferred for quickly generating weight maps that control the contribution of each image in the input multi-exposure set. Instead of employing intermediate HDR reconstruction and tone-mapping steps, a well-exposed fused image is generated directly for display on conventional display devices. Simulation results are compared with a number of existing single-resolution and multi-resolution techniques to show the benefits of the proposed scheme for a variety of cases.
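To make the pipeline above concrete, the following is a minimal Python sketch using OpenCV and NumPy, not the authors' implementation. A bilateral filter stands in for the PDE-based nonlinear diffusion filter, the WLS weight-map refinement step is omitted, and the function names (`decompose`, `weight_map`, `fuse_exposures`) and parameter values are illustrative assumptions.

```python
# Minimal sketch of edge-preserving multi-exposure fusion:
# BL/DL decomposition, texture + saturation weight maps,
# Laplacian-pyramid fusion of BLs, weighted DLs added back.
import cv2
import numpy as np

def decompose(img, d=9, sigma_color=0.1, sigma_space=15):
    """Split an image into a base layer (BL) and a detail layer (DL)."""
    base = cv2.bilateralFilter(img, d, sigma_color, sigma_space)  # edge-preserving smoothing
    return base, img - base

def weight_map(base, img, ksize=9, eps=1e-6):
    """Per-pixel weight from BL texture (local range) and color saturation."""
    gray = cv2.cvtColor(base, cv2.COLOR_BGR2GRAY)
    kernel = np.ones((ksize, ksize), np.uint8)
    local_range = cv2.dilate(gray, kernel) - cv2.erode(gray, kernel)  # texture measure
    saturation = img.std(axis=2)                                      # channel spread
    return local_range * saturation + eps

def laplacian_pyramid(img, levels):
    pyr, cur = [], img
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)
        cur = down
    pyr.append(cur)                       # low-frequency residual
    return pyr

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels):
        img = cv2.pyrDown(img)
        pyr.append(img)
    return pyr

def fuse_exposures(images, levels=4):
    """Fuse a multi-exposure stack into one well-exposed LDR image in [0, 1]."""
    images = [im.astype(np.float32) / 255.0 for im in images]
    bases, details, weights = [], [], []
    for im in images:
        b, d = decompose(im)
        bases.append(b)
        details.append(d)
        weights.append(weight_map(b, im))
    wsum = np.sum(weights, axis=0)        # normalize weights to sum to one per pixel
    weights = [w / wsum for w in weights]
    fused_pyr = None                      # multi-resolution fusion of the base layers
    for b, w in zip(bases, weights):
        lp = laplacian_pyramid(b, levels)
        gp = gaussian_pyramid(w, levels)
        contrib = [l * g[..., None] for l, g in zip(lp, gp)]
        fused_pyr = contrib if fused_pyr is None else [f + c for f, c in zip(fused_pyr, contrib)]
    fused_base = fused_pyr[-1]            # collapse the fused pyramid
    for lap in reversed(fused_pyr[:-1]):
        fused_base = cv2.pyrUp(fused_base, dstsize=(lap.shape[1], lap.shape[0])) + lap
    # Add the weighted detail layers back to enhance texture in the result.
    fused_detail = sum(d * w[..., None] for d, w in zip(details, weights))
    return np.clip(fused_base + fused_detail, 0.0, 1.0)
```

Using simple per-pixel weighted averaging for the detail layers keeps the sketch short; the approach described above instead refines these weight maps with WLS optimization before blending.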
Moreover, the approaches proposed in this thesis are also effective for blending flash/no-flash image pairs and multi-focus images, that is, input images photographed with and without flash and images focused on different targets, respectively. A further advantage of the present technique is that it is well suited to detail enhancement in the fused image.
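Because the weight maps in the sketch above depend only on local texture and color saturation, the same `fuse_exposures` function can in principle be reused for such inputs. The snippet below is a hypothetical usage for a multi-focus pair; the file names are placeholders and the function is the one defined in the previous sketch.

```python
# Hypothetical usage on a multi-focus pair: the local-range (texture) term in the
# weight map favours whichever input is sharper at each pixel, so no
# exposure-specific logic is needed. Reuses fuse_exposures from the sketch above.
import cv2
import numpy as np

near = cv2.imread("focus_near.jpg")   # focused on the foreground (placeholder file)
far = cv2.imread("focus_far.jpg")     # focused on the background (placeholder file)
fused = fuse_exposures([near, far])   # same pipeline as for multi-exposure input
cv2.imwrite("fused_multifocus.png", (fused * 255).astype(np.uint8))
```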