Computers & Graphics

Depth of field synthesis from sparse views

Abstract

Computer-generated images are most easily produced as pinhole images, whereas images obtained with optical lenses exhibit a Depth-of-Field (DOF) effect. This is because optical lenses gather light across a finite aperture, whereas a simulated pinhole lens gathers light through an infinitesimally small aperture and therefore produces sharp images at any depth. Simulating the physical process of gathering light across a finite aperture can be done, for example, with distributed ray tracing, but this is computationally much more expensive than simulation through an infinitesimal aperture. The usual way of simulating lens effects is therefore to produce a pinhole image and then post-process it to approximate the DOF. Post-processing algorithms are fast but suffer from incorrect visibilities. In this paper, we propose a novel algorithm that tackles the visibility issue with a sparse set of views rendered through the optical center of the lens and several peripheral viewpoints distributed on the lens. All peripheral images are warped towards the central view to create a Layered-Depth-Image (LDI), so that all observed 3D points located on the same central view-ray are stacked on the same pixel of the LDI. Then, each pixel in the LDI is conceptually scattered into a Point-Spread-Function (PSF) and blended in depth order. Because the scatter method is very inefficient on a GPU, we propose a selective gather method for DOF synthesis, which scans the neighborhood of a pixel and blends the colors from the PSFs covering that pixel. Experiments show that the proposed algorithm can synthesize high-quality DOF effects close to the results of distributed ray tracing, but at a much higher speed. (C) 2015 Elsevier Ltd. All rights reserved.
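The abstract describes the selective gather step only at a high level. The sketch below illustrates the general idea in Python under several assumptions that are not taken from the paper: the LDI is stored as a per-pixel list of (color, depth) samples along the central view ray, the PSF is modeled as a uniform disc whose radius comes from the standard thin-lens circle-of-confusion formula, and contributions are composited front to back with a crude coverage term. All names, parameters, and the exact weighting are illustrative, not the authors' implementation.

```python
import numpy as np

def coc_radius_px(depth, focus_depth, aperture, focal_length, px_per_unit):
    """Circle-of-confusion radius in pixels, from the standard thin-lens model.

    All distances are in the same world unit; this is a generic approximation,
    not necessarily the PSF model used in the paper.
    """
    c = aperture * focal_length * abs(depth - focus_depth) / (
        depth * (focus_depth - focal_length))
    return 0.5 * c * px_per_unit


def gather_pixel(ldi, px, py, max_radius,
                 focus_depth, aperture, focal_length, px_per_unit):
    """Selective gather for one output pixel (px, py).

    `ldi[y][x]` is assumed to hold a list of (color, depth) samples: all 3D
    points observed along the central view ray through that pixel.  We scan a
    neighborhood of size max_radius, keep the samples whose PSF disc covers
    (px, py), and composite them in depth order so near samples occlude far ones.
    """
    h, w = len(ldi), len(ldi[0])
    r = int(np.ceil(max_radius))
    hits = []  # (depth, color, alpha)
    for y in range(max(0, py - r), min(h, py + r + 1)):
        for x in range(max(0, px - r), min(w, px + r + 1)):
            d2 = (x - px) ** 2 + (y - py) ** 2
            for color, depth in ldi[y][x]:
                rad = coc_radius_px(depth, focus_depth, aperture,
                                    focal_length, px_per_unit)
                rad = max(rad, 0.5)                    # in-focus samples still cover one pixel
                if d2 <= rad * rad:                    # PSF disc covers the output pixel
                    alpha = min(1.0, 1.0 / (np.pi * rad * rad))  # energy spread over the disc
                    hits.append((depth, np.asarray(color, dtype=float), alpha))
    hits.sort(key=lambda s: s[0])                      # front-to-back depth order
    out, transmittance = np.zeros(3), 1.0
    for _, color, alpha in hits:
        out += transmittance * alpha * color
        transmittance *= 1.0 - alpha
        if transmittance < 1e-3:                       # early exit once fully covered
            break
    return out
```

On a GPU this per-pixel loop would map naturally to one thread per output pixel, with max_radius bounded by the largest circle of confusion occurring in the neighborhood; the actual blending weights and occlusion handling in the paper may differ from this simplified sketch.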
