ACM Transactions on Graphics > Video Extrapolation Using Neighboring Frames

Video Extrapolation Using Neighboring Frames



Abstract

With the popularity of immersive display systems that fill the viewer's field of view (FOV) entirely, demand for wide-FOV content has increased. A video extrapolation technique based on the reuse of existing videos is one of the most efficient ways to produce wide-FOV content. Extrapolating a video poses a great challenge, however, because only a limited amount of cues and information can be leveraged to estimate the extended region. This article introduces a novel framework that extrapolates an input video and thereby converts conventional content into wide-FOV content. The key idea of the proposed approach is to integrate the information from all frames of the input video into each frame. Utilizing the information from all frames is crucial, because a two-dimensional-transformation-based approach can hardly achieve this goal when parallax caused by camera motion is apparent. Warping guided by three-dimensional scene points matches the viewpoints between the different frames, and the matched frames are blended to create extended views. Various experiments demonstrate that the results of the proposed method are more visually plausible than those produced with state-of-the-art techniques.
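The core pipeline the abstract describes — warp neighboring frames toward the reference viewpoint, then blend the warped frames onto a wider canvas — can be sketched in simplified form. The sketch below is not the paper's implementation: it assumes the per-frame warps have already been reduced to plain homographies (the paper derives its warps from reconstructed three-dimensional scene points, which is not reproduced here), the function names `warp_to_canvas` and `extrapolate` are illustrative, and sampling is nearest-neighbor with uniform averaging for brevity.

```python
import numpy as np

def warp_to_canvas(frame, H, canvas_shape):
    """Inverse-warp a grayscale `frame` onto a larger canvas.

    H maps canvas pixel coordinates (x, y, 1) to source-frame
    coordinates; pixels that fall outside the frame get weight 0.
    """
    ch, cw = canvas_shape
    ys, xs = np.mgrid[0:ch, 0:cw]
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    src = H @ pts
    src = src[:2] / src[2]                      # perspective divide
    sx = np.round(src[0]).astype(int).reshape(ch, cw)
    sy = np.round(src[1]).astype(int).reshape(ch, cw)
    h, w = frame.shape[:2]
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    canvas = np.zeros((ch, cw), dtype=float)
    weight = np.zeros((ch, cw), dtype=float)
    canvas[valid] = frame[sy[valid], sx[valid]]  # nearest-neighbor sample
    weight[valid] = 1.0
    return canvas, weight

def extrapolate(reference, neighbors, homographies, canvas_shape, offset):
    """Place the reference frame on a wider canvas and blend in
    warped neighboring frames to fill the extended region."""
    acc = np.zeros(canvas_shape, dtype=float)
    wsum = np.zeros(canvas_shape, dtype=float)
    oy, ox = offset
    h, w = reference.shape
    acc[oy:oy + h, ox:ox + w] += reference
    wsum[oy:oy + h, ox:ox + w] += 1.0
    for frame, H in zip(neighbors, homographies):
        warped, wgt = warp_to_canvas(frame, H, canvas_shape)
        acc += warped * wgt
        wsum += wgt
    # Average all contributions; leave never-covered pixels at 0.
    return np.where(wsum > 0, acc / np.maximum(wsum, 1e-9), 0.0)

# Toy usage: a 4x4 reference on the left half of a 4x8 canvas, and one
# neighbor whose homography shifts it into the right (extended) half.
ref = np.ones((4, 4))
neighbor = 2.0 * np.ones((4, 4))
H_shift = np.array([[1.0, 0.0, -4.0],
                    [0.0, 1.0,  0.0],
                    [0.0, 0.0,  1.0]])
out = extrapolate(ref, [neighbor], [H_shift], (4, 8), (0, 0))
```

In this toy setup the left half of `out` keeps the reference values and the right half is filled entirely by the warped neighbor, which is the essence of extending the view beyond the original frame boundary.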
