
Algorithm for dynamic disparity adjustment


Abstract

This paper presents an algorithm for enhancing stereo depth cues in moving computer-generated 3D images. The algorithm incorporates the results of an experiment in which observers were allowed to set their preferred eye separation with a set of moving scenes. The data derived from this experiment were used to design an algorithm for the dynamic adjustment of eye separation (or disparity) depending on the scene characteristics. The algorithm has the following steps: (1) Determine the near and far points in the computer graphics scene to be displayed; this is done by sampling the Z buffer. (2) Scale the scene about a point corresponding to the midpoint between the observer's two eyes; the scaling factor is calculated so that the nearest part of the scene lies just behind the monitor. (3) Adjust an eye separation parameter to create stereo depth according to the empirical function derived from the initial study; this has the effect of doubling the stereo depth in flat scenes but limiting the stereo depth for deep scenes. Steps 2 and 3 both reduce the discrepancy between focus and vergence for most scenes. The algorithm is applied dynamically in real time, with a damping factor applied so that the disparities never change too abruptly.
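The abstract does not give the paper's empirical function or implementation details, so the following Python sketch only illustrates the overall control loop under stated assumptions: SCREEN_DISTANCE, NOMINAL_EYE_SEP, DAMPING, the Z-buffer sampling stride, and the preferred_eye_separation() mapping are hypothetical placeholders standing in for the values and function derived in the original study.

```python
# Minimal sketch of a per-frame dynamic disparity adjustment loop, assuming a
# 2D NumPy Z buffer of view-space depths. The empirical eye-separation
# function below is a hypothetical stand-in: it roughly doubles stereo depth
# for flat scenes and compresses it for deep ones, matching the qualitative
# behaviour described in the abstract.

import numpy as np

SCREEN_DISTANCE = 0.75   # viewer-to-monitor distance in metres (assumed)
NOMINAL_EYE_SEP = 0.065  # average interocular distance in metres
DAMPING = 0.1            # fraction of the change applied per frame (assumed)


def near_far_from_z_buffer(z_buffer, step=8):
    """Step 1: sample the Z buffer to find the nearest and farthest visible depths."""
    samples = z_buffer[::step, ::step]
    visible = samples[np.isfinite(samples)]
    return float(visible.min()), float(visible.max())


def scale_scene_to_screen(near, far):
    """Step 2: scale the scene about the eye midpoint so that its nearest
    point sits approximately at (just behind) the monitor plane."""
    scale = SCREEN_DISTANCE / near
    return near * scale, far * scale, scale


def preferred_eye_separation(near, far):
    """Step 3 (hypothetical placeholder for the paper's empirical function):
    larger separation for flat scenes, smaller for deep scenes."""
    depth_range = max(far - near, 1e-6)
    factor = 2.0 / (1.0 + depth_range / SCREEN_DISTANCE)
    return NOMINAL_EYE_SEP * min(factor, 2.0)


def update_eye_separation(current_sep, z_buffer):
    """One frame of the adjustment loop, damped so that the resulting
    disparities never change too abruptly."""
    near, far = near_far_from_z_buffer(z_buffer)
    near_s, far_s, _ = scale_scene_to_screen(near, far)
    target = preferred_eye_separation(near_s, far_s)
    return current_sep + DAMPING * (target - current_sep)


# Per frame: eye_sep = update_eye_separation(eye_sep, current_z_buffer)
```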
