IEEE Transactions on Image Processing

Harnessing Multi-View Perspective of Light Fields for Low-Light Imaging



Abstract

Light Field (LF) imaging offers unique advantages such as post-capture refocusing and depth estimation, but low-light conditions severely limit these capabilities. To restore low-light LFs, we must harness the geometric cues present across the different LF views, which is not possible with single-frame low-light enhancement techniques. We propose a deep neural network, L3Fnet, for Low-Light Light Field (L3F) restoration, which not only visually enhances each LF view but also preserves the epipolar geometry across views. We achieve this by adopting a two-stage architecture for L3Fnet. Stage-I looks at all the LF views to encode the LF geometry. This encoded information is then used in Stage-II to reconstruct each LF view. To facilitate learning-based techniques for low-light LF imaging, we collected a comprehensive LF dataset of various scenes. For each scene, we captured four LFs: one with near-optimal exposure and ISO settings, and the others at progressively lower light levels, ranging from low to extremely low-light settings. The effectiveness of the proposed L3Fnet is supported by both visual and numerical comparisons on this dataset. To further analyze the performance of low-light restoration methods, we also propose the L3F-wild dataset, which contains LFs captured late at night at almost zero lux; no ground truth is available for this dataset. To perform well on L3F-wild, a method must adapt to the light level of the captured scene. To this end, we use a pre-processing block that makes L3Fnet robust to varying degrees of low-light conditions. Lastly, we show that L3Fnet can also be used for low-light enhancement of single-frame images, despite being engineered for LF data. We do so by converting the single-frame DSLR image into a form suitable for L3Fnet, which we call a pseudo-LF. Our code and dataset are available for download at https://mohitlamba94.github.io/L3Fnet/
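The pseudo-LF idea mentioned in the abstract can be illustrated with a minimal sketch. The function name, the 7x7 angular grid, and the (u, v, H, W, C) tensor layout below are our assumptions for illustration, not the paper's actual API: the single DSLR frame is simply replicated across every sub-aperture position so that an LF-trained network receives input of the shape it expects.

```python
import numpy as np

def make_pseudo_lf(img: np.ndarray, grid=(7, 7)) -> np.ndarray:
    """Tile a single frame into a (u, v, H, W, C) pseudo light field.

    Every angular view is an identical copy of the input frame, so the
    epipolar geometry is trivially flat -- a hypothetical reading of the
    paper's pseudo-LF conversion, sufficient for shape compatibility.
    """
    u, v = grid
    # broadcast_to prepends the angular dims; copy() makes it writable
    return np.broadcast_to(img, (u, v) + img.shape).copy()

# Example: a random 64x64 RGB "DSLR frame" becomes a 7x7 pseudo-LF
img = np.random.rand(64, 64, 3).astype(np.float32)
plf = make_pseudo_lf(img)
print(plf.shape)  # (7, 7, 64, 64, 3)
```

Since all views are identical, any disparity a network infers between pseudo-LF views is zero, which is consistent with treating the single frame as a scene viewed from a single aperture position.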


