
Cinematic Virtual Reality With Motion Parallax From a Single Monoscopic Omnidirectional Image



Abstract

Complementary advances in the fields of virtual reality (VR) and reality capture have led to a growing demand for VR experiences that enable users to convincingly move around in an environment created from a real-world scene. Most methods address this issue by first acquiring a large number of image samples from different viewpoints. However, this is often costly in both time and hardware requirements, and is incompatible with the growing selection of existing, casually-acquired 360-degree images available online. In this paper, we present a novel solution for cinematic VR with motion parallax that instead only uses a single monoscopic omnidirectional image as input. We provide new insights on how to convert such an image into a scene mesh, and discuss potential uses of this representation. We notably propose using a VR interface to manually generate a 360-degree depth map, visualized as a 3D mesh and modified by the operator in real-time. We applied our method to different real-world scenes, and conducted a user study comparing meshes created from depth maps of different levels of accuracy. The results show that our method enables perceptually comfortable VR viewing when users move around in the scene.
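The core representation the abstract describes is a scene mesh built from a single equirectangular image paired with a 360-degree depth map. A minimal sketch of the underlying geometry, assuming an equirectangular layout and a hypothetical helper name (the paper's actual mesh construction and UV conventions may differ), is to back-project each pixel along its spherical viewing direction scaled by depth:

```python
import numpy as np

def equirect_depth_to_points(depth):
    """Back-project an equirectangular depth map (H x W, in metres)
    into 3D points: unit sphere directions scaled by per-pixel depth.
    These points can serve as vertices of a scene mesh whose faces
    connect neighbouring pixels."""
    h, w = depth.shape
    # Pixel centres -> spherical angles: longitude in [-pi, pi),
    # latitude in [pi/2, -pi/2] (top image row looks up).
    lon = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)          # both (h, w)
    # Unit direction vectors on the sphere.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    # Scale each direction by the authored depth value.
    return depth[..., None] * np.stack([x, y, z], axis=-1)  # (h, w, 3)
```

With the viewer's head position offset from the sphere centre, rendering this mesh instead of the flat 360-degree image is what produces motion parallax.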
