IEEE Transactions on Image Processing

Virtual View Synthesis for Free Viewpoint Video and Multiview Video Compression using Gaussian Mixture Modelling



Abstract

High-quality virtual views need to be synthesized from adjacent available views for free viewpoint video and multiview video coding (MVC) to provide users with a more realistic 3D viewing experience of a scene. View synthesis techniques suffer from poor rendering quality due to holes created by occlusion and by integer rounding errors during warping. To remove the holes in the virtual view, existing techniques exploit spatial and temporal correlation in intra/inter-view images and depth maps. However, they still suffer quality degradation in the boundary region between foreground and background areas due to the low spatial correlation in texture images and the low correspondence between inter-view depth maps. To overcome these limitations, the proposed technique uses a number of models in Gaussian mixture modelling (GMM) to separate background and foreground pixels. The missing pixels introduced by the warping process are recovered by an adaptive weighted average of the pixel intensities from the corresponding GMM model(s) and the warped image. The weights vary with time to accommodate changes due to a dynamic background and the motion of moving objects. We also introduce an adaptive strategy to reset the GMM modelling if the contributions of the pixel intensities drop significantly. Experimental results indicate that the proposed approach provides a 5.40-6.60 dB PSNR improvement compared with the relevant methods. To verify the effectiveness of the proposed view synthesis technique, we use the synthesized view as an extra reference frame in motion estimation for MVC. The experimental results confirm that the proposed view synthesis improves PSNR by 3.15-5.13 dB compared with the conventional three reference frames.
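
The snippet below is a minimal sketch of the kind of per-pixel GMM background maintenance and hole filling the abstract describes; it is not the authors' implementation. It assumes grayscale float frames with warping holes given as a boolean mask. The class PixelGMM, the function fill_holes, and the parameters K, ALPHA, BETA, and RESET_AFTER are illustrative choices, and the paper's time-varying adaptive weights are reduced here to a fixed blend weight.

```python
import numpy as np

# Illustrative parameters, not values from the paper.
K = 3               # Gaussians per pixel
ALPHA = 0.05        # GMM learning rate
BETA = 0.7          # weight given to the GMM estimate when filling a hole
RESET_AFTER = 15    # reset a pixel's mixture after this many consecutive misses


class PixelGMM:
    """Per-pixel Gaussian mixture maintained over the warped-view sequence."""

    def __init__(self, h, w):
        self.mu = np.zeros((K, h, w), dtype=np.float32)           # means
        self.var = np.full((K, h, w), 225.0, dtype=np.float32)    # variances
        self.w = np.full((K, h, w), 1.0 / K, dtype=np.float32)    # mixture weights
        self.miss = np.zeros((h, w), dtype=np.int32)              # consecutive non-matches

    def update(self, frame, valid):
        """Update the mixture with pixels that were successfully warped (valid mask)."""
        d2 = (frame[None] - self.mu) ** 2
        match = (d2 < 6.25 * self.var) & valid[None]               # within 2.5 sigma
        best = np.argmax(np.where(match, self.w, -1.0), axis=0)    # best matching Gaussian
        hit = np.take_along_axis(match, best[None], 0)[0]
        rows, cols = np.indices(frame.shape)
        idx = (best, rows, cols)
        self.w *= 1.0 - ALPHA                                      # decay all weights
        self.w[idx] += ALPHA * hit                                 # reward the matched one
        rho = ALPHA * hit                                          # simplified update rate
        self.mu[idx] += rho * (frame - self.mu[idx])
        self.var[idx] += rho * ((frame - self.mu[idx]) ** 2 - self.var[idx])
        self.w /= self.w.sum(axis=0, keepdims=True)
        # Adaptive reset (simplified): if a valid pixel keeps missing every
        # Gaussian, its model has drifted, so re-initialise that pixel.
        self.miss = np.where(valid & ~hit, self.miss + 1, 0)
        stale = self.miss > RESET_AFTER
        self.mu[:, stale] = frame[stale]
        self.var[:, stale] = 225.0
        self.w[:, stale] = 1.0 / K
        self.miss[stale] = 0

    def background(self):
        """Most probable (background) intensity per pixel."""
        best = np.argmax(self.w, axis=0)
        return np.take_along_axis(self.mu, best[None], 0)[0]


def fill_holes(warped, hole_mask, gmm):
    """Fill warping holes with a weighted average of the GMM background estimate
    and a crude spatial fallback from the warped image (the paper adapts these
    weights over time; BETA is fixed here for simplicity)."""
    bg = gmm.background()
    fallback = warped.copy()
    if (~hole_mask).any():
        fallback[hole_mask] = warped[~hole_mask].mean()
    out = warped.copy()
    out[hole_mask] = BETA * bg[hole_mask] + (1.0 - BETA) * fallback[hole_mask]
    return out


if __name__ == "__main__":
    # Toy run: a static background of intensity 100 observed through noisy
    # warped frames with 5% randomly missing pixels.
    h, w = 48, 64
    gmm = PixelGMM(h, w)
    rng = np.random.default_rng(0)
    for _ in range(30):
        warped = (100.0 + 5.0 * rng.standard_normal((h, w))).astype(np.float32)
        holes = rng.random((h, w)) < 0.05
        gmm.update(warped, ~holes)
        filled = fill_holes(warped, holes, gmm)
```

In the full method, the filled view would then be inserted as an additional reference frame for MVC motion estimation; that coding stage is outside the scope of this sketch.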
