Image-based spatiotemporal modeling and view interpolation of dynamic events.

Abstract

Digital photographs and video are exciting inventions that let us capture the visual experience of events around us in a computer and re-live the experience, albeit in a restricted manner. Photographs capture only snapshots of a dynamic event, and while video does capture motion, it is recorded from pre-determined positions and consists of images discretely sampled in time, so the timing cannot be changed.

This thesis presents an approach for re-rendering a dynamic event from an arbitrary viewpoint with any timing, using images captured from multiple video cameras. The event is modeled as a non-rigidly varying dynamic scene captured by many images from different viewpoints, at discretely sampled times. First, the spatio-temporal geometric properties (shape and instantaneous motion) are computed. Scene flow is introduced as a measure of non-rigid motion, along with algorithms to compute it together with the scene shape. The novel view synthesis problem is then posed as one of recovering corresponding points in the original images, using the shape and scene flow. A reverse mapping algorithm, ray casting across space and time, is developed to compute a novel image from any viewpoint in the 4D space of position and time. Results are shown on real-world events captured in the CMU 3D Room, by creating synthetic renderings of the event from novel, arbitrary positions in space and time. Multiple such re-created renderings can be put together to create re-timed fly-by movies of the event, yielding a visual experience richer than that of a regular video clip or of simply switching between frames from multiple cameras.
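The core idea above can be sketched in a few lines: scene flow assigns each 3D scene point a 3D motion vector, so a shape at a fractional time between two sampled frames can be obtained by advecting points along their flow, then projecting them into an arbitrary virtual pinhole camera. This is a minimal forward-mapping illustration, not the thesis's reverse-mapped ray-casting algorithm; the function names (`project`, `render_at`) and the linear advection model are assumptions made here for clarity.

```python
import numpy as np

def project(points, K, R, t):
    """Pinhole projection of Nx3 world points into a camera with
    intrinsics K, rotation R, and translation t (all assumed known)."""
    cam = points @ R.T + t          # world -> camera coordinates
    pix = cam @ K.T                 # apply intrinsics
    return pix[:, :2] / pix[:, 2:3] # perspective divide

def render_at(points_t, scene_flow, alpha, K, R, t):
    """Advect the shape at time t along its scene flow to the fractional
    time t + alpha (0 <= alpha <= 1), then project into a virtual camera.
    A linear motion model between sampled frames is assumed."""
    moved = points_t + alpha * scene_flow
    return project(moved, K, R, t)

# Illustrative usage: one point at depth 2 moving 1 unit along x per frame,
# rendered halfway between frames by a camera at the origin.
K = np.array([[100., 0., 0.], [0., 100., 0.], [0., 0., 1.]])
R = np.eye(3)
t = np.zeros(3)
points_t = np.array([[0., 0., 2.]])
scene_flow = np.array([[1., 0., 0.]])
pixels = render_at(points_t, scene_flow, 0.5, K, R, t)  # → [[25., 0.]]
```

The thesis's actual renderer works in reverse: for each pixel of the novel image it casts a ray through the 4D space of position and time, intersects the interpolated shape, and gathers color from the corresponding points in the original camera images, which avoids the holes that forward point splatting can leave.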
