Conference on Stereoscopic Displays and Virtual Reality Systems XI; 19-22 January 2004; San Jose, CA, US

Temporally Consistent Virtual Camera Generation from Stereo Image Sequences

Abstract

The recent emergence of auto-stereoscopic 3D viewing technologies has increased demand for the creation of 3D video content. A range of glasses-free multi-viewer screens has been developed, requiring as many as nine views to be generated for each frame of video. This presents difficulties in both view generation and transmission bandwidth. This paper examines the use of stereo video capture as a means of generating multiple scene views via disparity analysis. A machine learning approach is applied to learn relationships between disparity-generated depth information and the source footage, and to generate depth information in a temporally smooth manner for both the left- and right-eye image sequences. A view morphing approach to multiple-view rendering is described that provides an excellent 3D effect on a range of glasses-free displays while remaining robust to inaccurate stereo disparity calculations.
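The pipeline the abstract outlines can be illustrated with a small sketch. The code below is not the authors' implementation: it uses OpenCV's semi-global block matching for the disparity analysis, a simple exponential moving average as a stand-in for the paper's learned temporal-consistency model, and a backward-warp interpolation as a simplified form of view morphing. Function names, parameter values, and the nine-view count (matching the multi-view screens mentioned above) are illustrative assumptions.

```python
import cv2
import numpy as np

def compute_disparity(left_bgr, right_bgr, num_disparities=64, block_size=5):
    """Disparity analysis on a rectified stereo pair (semi-global matching)."""
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=num_disparities,
                                    blockSize=block_size)
    # StereoSGBM returns fixed-point disparities scaled by 16.
    return matcher.compute(left, right).astype(np.float32) / 16.0

def smooth_disparity(current, previous, blend=0.7):
    """Blend with the previous frame's estimate to reduce temporal flicker
    (a simple stand-in for the learned temporal model described in the paper)."""
    if previous is None:
        return current
    return blend * current + (1.0 - blend) * previous

def render_virtual_view(left_bgr, disparity, t):
    """Warp the left image a fraction t of the way toward the right camera:
    t = 0 reproduces the left view, t = 1 approximates the right view.
    Backward warping with destination-space disparity is an approximation."""
    h, w = disparity.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    map_x = xs + t * np.maximum(disparity, 0.0)
    return cv2.remap(left_bgr, map_x, ys, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_REPLICATE)

def generate_views(left_bgr, right_bgr, prev_disparity=None, n_views=9):
    """Produce n_views evenly spaced virtual cameras between left and right;
    return the smoothed disparity so the caller can reuse it for the next frame."""
    disparity = smooth_disparity(compute_disparity(left_bgr, right_bgr),
                                 prev_disparity)
    views = [render_virtual_view(left_bgr, disparity, t)
             for t in np.linspace(0.0, 1.0, n_views)]
    return views, disparity
```

For a video sequence, the caller would feed each frame's returned disparity back in as prev_disparity, which is the role the paper assigns to its temporally smooth, learned depth model; the paper's view morphing also blends contributions from both eyes rather than warping the left image alone.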
