Journal: Neurocomputing

Automatic stereoscopic video generation based on virtual view synthesis



Abstract

Automatically synthesizing 3D content from a casual monocular video has become an important problem. Previous works either use no geometry information or rely on precise 3D geometry. Consequently, they fail to produce reasonable results when the 3D structure of the scene is complex, or when only noisy 3D geometry can be estimated from the monocular video. In this paper, we present an automatic and robust framework for synthesizing stereoscopic videos from casual 2D monocular videos. First, 3D geometry information (e.g., camera parameters and a depth map) is extracted from the 2D input video. Then a Bayesian-based View Synthesis (BVS) approach is proposed to render high-quality virtual views for stereoscopic video while coping with the noisy 3D geometry. Extensive experiments on various videos demonstrate that BVS synthesizes more accurate views than competing methods, and that the proposed framework outperforms state-of-the-art automatic 2D-to-3D conversion approaches. (C) 2014 Elsevier B.V. All rights reserved.
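The abstract's pipeline (depth map in, virtual stereo view out) rests on depth-image-based rendering: each pixel is shifted horizontally by a disparity proportional to focal length and stereo baseline, and inversely proportional to depth. The sketch below illustrates only that classic forward-warping step with a z-buffer for occlusions; it is not the paper's Bayesian BVS method, and the function name and parameters are illustrative assumptions.

```python
import numpy as np

def synthesize_right_view(left, depth, focal=500.0, baseline=0.05):
    """Forward-warp a left image into a virtual right view (simple DIBR sketch).

    disparity = focal * baseline / depth; nearer pixels (smaller depth)
    win occlusion conflicts via a z-buffer. Unmapped pixels remain holes,
    which a full system would inpaint.
    """
    h, w = depth.shape
    right = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)
    zbuf = np.full((h, w), np.inf)
    disparity = np.round(focal * baseline / depth).astype(int)
    for y in range(h):
        for x in range(w):
            xr = x - disparity[y, x]  # shift toward the virtual right camera
            if 0 <= xr < w and depth[y, x] < zbuf[y, xr]:
                right[y, xr] = left[y, x]
                zbuf[y, xr] = depth[y, x]
                filled[y, xr] = True
    return right, filled
```

With a constant depth plane the warp reduces to a uniform horizontal shift, which makes the behavior easy to verify; noisy per-pixel depth (the case the paper targets) instead scatters pixels and produces the artifacts BVS is designed to suppress.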


