
Real-time video-plus-depth content creation utilizing time-of-flight sensor - from capture to display


Abstract

Recent developments in 3D camera technologies, display technologies and other related fields have aimed to bring the 3D experience to home users and to establish services such as Three-Dimensional Television (3DTV) and Free-Viewpoint Television (FTV). Emerging multiview autostereoscopic displays require no eyewear and can be watched by multiple users simultaneously, which makes them very attractive for the home environment. To provide a natural 3D impression, autostereoscopic 3D displays are designed to synthesize multi-perspective virtual views of a scene using Depth-Image-Based Rendering (DIBR) techniques. A key issue of DIBR is that scene depth information, in the form of a depth map, is required in order to synthesize virtual views. Acquiring this information is a complex and challenging task and remains an active research topic.

In this thesis, the problem of dynamic 3D video content creation for real-world visual scenes is addressed. The work assumes a data acquisition setup comprising a Time-of-Flight (ToF) depth sensor and a single conventional video camera. The main objective is to develop efficient algorithms for the stages of synchronous data acquisition, color and ToF data fusion, and final view-plus-depth frame formatting and rendering. The outcome of this thesis is a prototype 3DTV system capable of rendering live 3D video on an autostereoscopic 3D display. The presented system makes extensive use of the processing capabilities of modern Graphics Processing Units (GPUs) to achieve real-time processing rates while providing acceptable visual quality. Furthermore, the issue of arbitrary view synthesis is investigated in the context of DIBR, and a novel approach based on depth layering is proposed. The proposed approach is applicable to general virtual view synthesis, i.e. to different camera parameters such as position, orientation and focal length, and to varying sensor spatial resolutions.
The experimental results demonstrate the real-time capability of the proposed method, even for CPU-based implementations. It compares favorably to other view synthesis methods in terms of visual quality while being more computationally efficient.
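To make the DIBR step concrete, the following is a minimal sketch of forward warping for a rectified horizontal baseline, the basic operation underlying depth-based view synthesis. It is not the thesis's depth-layering method; the function name, parameters, and NumPy-based splatting are illustrative assumptions. Per-pixel disparity follows d = f * b / Z, and pixels are written far-to-near so foreground surfaces win at occlusions; uncovered regions remain holes.

```python
import numpy as np

def dibr_forward_warp(color, depth, focal_px, baseline_m):
    """Warp a color image to a horizontally shifted virtual view.

    color     : (H, W, 3) float array
    depth     : (H, W) float array, metric depth Z per pixel
    focal_px  : focal length in pixels
    baseline_m: virtual camera baseline in meters

    Disparity per pixel is d = focal_px * baseline_m / Z, so nearer
    pixels shift more than distant ones. Pixels are splatted in
    back-to-front order so closer surfaces overwrite farther ones at
    occlusions; disoccluded regions stay as holes (zeros).
    """
    h, w, _ = color.shape
    virtual = np.zeros_like(color)
    disparity = np.round(focal_px * baseline_m / np.maximum(depth, 1e-6)).astype(int)
    # Visit pixels from far to near (descending depth).
    order = np.argsort(-depth, axis=None)
    ys, xs = np.unravel_index(order, depth.shape)
    xt = xs - disparity[ys, xs]  # shift left for a right-side virtual view
    valid = (xt >= 0) & (xt < w)
    virtual[ys[valid], xt[valid]] = color[ys[valid], xs[valid]]
    return virtual
```

In a full DIBR pipeline the remaining holes would be filled by inpainting or, as in layered approaches, by rendering depth layers separately and compositing them.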

Bibliographic information

  • Author

    Chuchvara Aleksandra;

  • Author affiliation
  • Year: 2014
  • Total pages
  • Format: PDF
  • Language: en
  • CLC classification
