Published in: IEEE Transactions on Broadcasting

Spatio-Temporally Consistent Novel View Synthesis Algorithm From Video-Plus-Depth Sequences for Autostereoscopic Displays



Abstract

In this paper, we propose a novel algorithm to generate multiple virtual views from a video-plus-depth sequence for modern autostereoscopic displays. Synthesizing realistic content in the disocclusion regions of the virtual views is the main challenge of this task. Spatial coherence and temporal consistency are the two key factors in producing perceptually satisfactory virtual images. The proposed algorithm employs a spatio-temporal consistency constraint to handle the uncertain pixels in the disocclusion regions. On the one hand, regarding spatial coherence, we combine the intensity gradient strength with the depth information to determine the filling priority for inpainting the disocclusion regions, so that the continuity of image structures is preserved. On the other hand, temporal consistency is enforced by estimating the intensities in the disocclusion regions across adjacent frames through an optimization process. We propose an iterative re-weighted framework that jointly considers intensity and depth consistency in the adjacent frames, which not only imposes temporal consistency but also reduces noise disturbance. Finally, to accelerate the multi-view synthesis process, we apply the proposed view synthesis algorithm to generate the intensity and depth maps at only the leftmost and rightmost viewpoints, so that the intermediate views can be efficiently interpolated through image warping according to the depth maps associated with the two synthesized images and their corresponding symmetric depths. In the experimental validation, we perform quantitative evaluation on synthetic data as well as subjective assessment on real video data, in comparison with several representative methods, to demonstrate the superior performance of the proposed algorithm.
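The abstract's spatial-coherence step, combining intensity-gradient strength with depth to rank disocclusion pixels for inpainting, can be illustrated with a minimal sketch. This is not the paper's actual formula: the linear weighting `alpha`, the normalization, and the preference for far (background) pixels are all assumptions made for illustration.

```python
import numpy as np

def filling_priority(intensity, depth, hole_mask, alpha=0.5):
    """Hedged sketch: score hole pixels for inpainting order.

    Higher score = filled earlier. Strong image gradients (structure
    continuity) and larger depth (background) raise the priority; the
    weighting scheme here is an assumption, not the published method.
    """
    # Finite-difference gradient magnitude of the intensity image.
    gy, gx = np.gradient(intensity.astype(float))
    grad_mag = np.hypot(gx, gy)

    # Normalize depth so larger values mean "farther" (background).
    d = depth.astype(float)
    d_norm = (d - d.min()) / (np.ptp(d) + 1e-8)

    # Blend normalized gradient strength with depth preference.
    priority = alpha * grad_mag / (grad_mag.max() + 1e-8) \
        + (1.0 - alpha) * d_norm

    # Only pixels inside the disocclusion mask are candidates.
    priority = np.where(hole_mask, priority, -np.inf)
    return priority
```

In a full pipeline one would repeatedly pick the highest-priority hole pixel, fill it from known neighbors, and update the mask, in the spirit of exemplar-based inpainting.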

