International symposium on photoelectronic detection and imaging

Real-time arbitrary view synthesis method for ultra-HD auto-stereoscopic display


Abstract

An arbitrary view synthesis method from 2D-Plus-Depth images for real-time auto-stereoscopic display is presented. Traditional methods use depth-image-based rendering (DIBR), a process that synthesizes "virtual" views of a scene from still or moving images and associated per-pixel depth information. All the virtual view images are generated first, and the final stereo image is then synthesized from them. DIBR greatly reduces the number of reference images required and is flexible and efficient because depth images are used. However, it causes problems such as holes appearing in the rendered image and depth discontinuities on object surfaces in the virtual image plane. Here, reversed disparity-shift pixel rendering is used to generate the stereo image directly, so the target image contains no holes. To avoid duplicated computation and to match any specific three-dimensional display, a selecting table is designed to pick the appropriate virtual viewpoints for the auto-stereoscopic display. According to the selecting table, only the sub-pixels of the appropriate virtual viewpoints are calculated, so the amount of computation is independent of the number of virtual viewpoints. In addition, 3D image warping is used to translate depth information into parallax between virtual viewpoints, and the viewer can adjust the zero-parallax-setting plane (ZPS) and change the parallax conveniently to suit his or her personal preferences. The proposed method is implemented with OpenGL and demonstrated on a laptop computer with a 2.3 GHz Intel Core i5 CPU and an NVIDIA GeForce GT 540M GPU. A frame rate of 30 frames per second is achieved with 4096×2340 video, with high synthesis efficiency and a good stereoscopic sense. The presented method meets the requirements of real-time ultra-HD super multi-view auto-stereoscopic display.
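
The abstract only outlines the pipeline, so a minimal NumPy sketch of the three ideas it names is given below: a linear depth-to-parallax mapping around an adjustable ZPS, a per-sub-pixel view-selection table, and backward (reversed disparity-shift) sampling of the reference image. This is not the authors' OpenGL/GPU implementation; the slanted-lenticular layout formula and all function and parameter names (depth_to_parallax, zps, max_parallax, slant) are assumptions made for illustration.

```python
# Illustrative sketch only, not the paper's implementation.
import numpy as np

def depth_to_parallax(depth, zps=128.0, max_parallax=24.0):
    """Map an 8-bit depth image to signed per-pixel parallax (in pixels).

    Pixels at depth == zps fall on the zero-parallax plane; changing zps moves
    the whole scene in front of or behind the screen (assumed linear mapping).
    """
    return max_parallax * (depth.astype(np.float32) - zps) / 255.0

def build_view_table(height, width, num_views=8, slant=1.0 / 3.0):
    """Per-sub-pixel view index for an assumed slanted-lenticular panel layout.

    Returns an array of shape (height, width, 3) giving the virtual viewpoint
    that feeds each R/G/B sub-pixel; only those sub-pixels are ever computed.
    """
    y, x = np.mgrid[0:height, 0:width]
    c = np.arange(3)[None, None, :]                       # sub-pixel channel offset
    phase = x[..., None] * 3 + c - np.round(y[..., None] * 3 * slant)
    return (phase % num_views).astype(np.int32)

def render_autostereo(color, depth, num_views=8, zps=128.0, max_parallax=24.0):
    """Synthesize the interleaved stereo image directly, one sub-pixel at a time.

    Each output sub-pixel samples the reference image at a reversed (backward)
    disparity shift, so no holes appear in the target image and the cost is
    independent of the number of virtual viewpoints.
    """
    h, w, _ = color.shape
    parallax = depth_to_parallax(depth, zps, max_parallax)           # (h, w)
    view_tab = build_view_table(h, w, num_views)                     # (h, w, 3)
    # Normalized camera offset per view, centered so the middle view has zero shift.
    offsets = (view_tab - (num_views - 1) / 2.0) / (num_views - 1)   # (h, w, 3)

    xs = np.arange(w)[None, :, None]                                 # target x coordinates
    src_x = np.clip(np.round(xs - offsets * parallax[..., None]).astype(np.int32), 0, w - 1)
    ys = np.arange(h)[:, None, None]
    cs = np.arange(3)[None, None, :]
    return color[ys, src_x, cs]                                      # interleaved (h, w, 3) image

if __name__ == "__main__":
    # Small random 2D-plus-depth test frame (the paper's demo used 4096x2340 video).
    rng = np.random.default_rng(0)
    color = rng.integers(0, 256, (288, 512, 3), dtype=np.uint8)
    depth = rng.integers(0, 256, (288, 512), dtype=np.uint8)
    out = render_autostereo(color, depth)
    print(out.shape)  # (288, 512, 3)
```

Because each output sub-pixel looks up its own source pixel directly from the selection table, the per-frame cost depends only on the panel resolution and not on the number of virtual viewpoints, which is the property the abstract emphasizes.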
