Journal: IEEE Transactions on Computational Imaging

A Unified Learning-Based Framework for Light Field Reconstruction From Coded Projections



Abstract

Light fields offer a rich way to represent the 3D world by capturing the spatio-angular dimensions of the visual signal. However, the popular way of capturing light fields (LF) via a plenoptic camera imposes a spatio-angular resolution trade-off. To address this issue, computational imaging techniques such as compressive light field and programmable coded aperture have been proposed, which reconstruct full sensor-resolution LF from coded projections of the LF. Here, we present a unified learning framework that can reconstruct LF from a variety of multiplexing schemes with a minimal number of coded images as input. We consider three light field capture schemes: a heterodyne capture scheme with the code placed near the sensor, a coded aperture scheme with the code at the camera aperture, and finally a dual-exposure scheme that captures a focus-defocus pair with no explicit coding. Our algorithm consists of three stages. First, we recover the all-in-focus image from the coded image. Second, we estimate the disparity maps for all the LF views from the coded image and the all-in-focus image. Finally, we render the LF by warping the all-in-focus image using the estimated disparity maps. We show that our proposed learning algorithm performs on par with or better than the state-of-the-art methods for all three multiplexing schemes. LF from a focus-defocus pair is especially attractive, as it requires no hardware modification and produces LF reconstructions comparable to current state-of-the-art learning-based view synthesis approaches that use multiple images. Thus, our work paves the way for capturing full-resolution LF using conventional cameras such as DSLRs and smartphones.
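The third stage above, rendering each LF view by warping the all-in-focus image with its estimated disparity map, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function name, the nearest-neighbor sampling, and the sign convention for the angular offsets (du, dv) are all illustrative assumptions.

```python
import numpy as np

def warp_view(all_in_focus, disparity, du, dv):
    """Synthesize one light-field view (illustrative sketch).

    For a view at angular offset (du, dv), each output pixel (y, x)
    is sampled from (y + dv * disparity[y, x], x + du * disparity[y, x])
    in the all-in-focus image, using nearest-neighbor lookup with
    source coordinates clamped to the image bounds.
    """
    h, w = all_in_focus.shape[:2]
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.round(ys + dv * disparity).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + du * disparity).astype(int), 0, w - 1)
    return all_in_focus[src_y, src_x]
```

A full LF render would call this once per angular position (du, dv) of the view grid; the central view (du = dv = 0) reproduces the all-in-focus image itself. Real implementations typically use differentiable bilinear sampling so the warp can sit inside the learning pipeline.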
