Conference on Stereoscopic Displays and Virtual Reality Systems

Predictive Coding of Depth Images Across Multiple Views



Abstract

A 3D video stream is typically obtained from a set of synchronized cameras that simultaneously capture the same scene (multiview video). This technology enables applications such as free-viewpoint video, which allows the viewer to select a preferred viewpoint, and 3D TV, where the depth of the scene can be perceived on a special display. Because the user-selected view does not always correspond to a camera position, it may be necessary to synthesize a virtual camera view. To synthesize such a virtual view, we have adopted a depth-image-based rendering technique that employs one depth map for each camera. Consequently, remote rendering of the 3D video requires a compression technique for both texture and depth data. This paper presents a predictive-coding algorithm for the compression of depth images across multiple views. The presented algorithm provides (a) improved coding efficiency for depth images over block-based motion-compensation encoders (H.264), and (b) random access to different views for fast rendering. The proposed depth-prediction technique works by synthesizing/computing the depth of 3D points from a reference depth image. The attractiveness of the depth-prediction algorithm is that it avoids an independent transmission of depth for each view, while simplifying view interpolation by synthesizing depth images for arbitrary viewpoints. We present experimental results for several multiview depth sequences, showing a quality improvement of up to 1.8 dB compared to H.264 compression.

