We present a new approach for faster rendering of large synthetic environments using video-based representations. We decompose the large environment into cells and pre-compute video-based impostors, using MPEG compression, to represent the sets of objects far from each cell. At runtime, we decode the MPEG streams with rendering algorithms that provide nearly constant-time random access to any frame. The resulting system has been implemented and used for an interactive walkthrough of a house model with 260,000 polygons and realistic lighting and textures. It renders this model at 16 frames per second on average (an eightfold improvement over simpler algorithms) on a Pentium II PC with an off-the-shelf graphics card.
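The cell-and-impostor lookup the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the per-cell frame layout, and the choice of one pre-computed frame per 10 degrees of view direction are all assumptions, and the decoded MPEG frames are stood in for by placeholder strings so only the constant-time indexing logic is shown.

```python
# Hypothetical sketch of cell-based impostor selection; names and data
# layout are assumptions, not the system described in the abstract.

FRAMES_PER_IMPOSTOR = 36  # assumed: one pre-computed frame per 10 degrees


def build_impostor_index(cell_ids):
    """Map each cell id to its list of pre-decoded impostor frames.

    In the real system these frames come from decoding an MPEG stream;
    here each 'frame' is a placeholder string.
    """
    return {
        cid: [f"cell{cid}-frame{i}" for i in range(FRAMES_PER_IMPOSTOR)]
        for cid in cell_ids
    }


def impostor_frame(index, cell_id, view_angle_deg):
    """Constant-time random access to the frame nearest the view direction."""
    step = 360.0 / FRAMES_PER_IMPOSTOR
    i = int(round((view_angle_deg % 360.0) / step)) % FRAMES_PER_IMPOSTOR
    return index[cell_id][i]
```

At runtime the walkthrough would draw the nearby geometry normally and, for each distant object set, fetch the impostor frame matching the current cell and view direction; because the lookup is a dictionary access plus an index computation, its cost does not grow with scene size.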