Human Factors and Ergonomics Society Annual Meeting (HFES 2008)
Integrating Head Pose Tracking with Eye Tracking for Wider FOV Virtual Environment

Abstract

In virtual-environment-based driving experiments, eye tracking is very helpful for quantitatively investigating driver visual behavior. However, commercial eye trackers usually have a limited maximum tracking range, e.g., ±35° horizontally for the Tobii x50. In high-fidelity applications with a 135-degree field of view, or even a 360-degree curved screen, there is a need to extend the range of eye tracking. This paper proposes a method that combines head pose tracking with eye tracking to achieve large-range sightline tracking in wide field-of-view applications. Head poses (yaw and pitch) were estimated from head images with Multilayer Perceptrons (MLP). Head images were represented by a combination of coefficients obtained from Principal Component Analysis (PCA). With a 7×6 calibration grid, head pose estimation achieved mean errors of −2.1° in yaw and −6.5° in pitch. The standard deviation was about 10°. The estimation process is feasible for real-time wide field-of-view virtual environment applications.
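The pipeline the abstract describes (compress each head image to PCA coefficients, then regress yaw and pitch with an MLP) can be sketched as below. This is a minimal illustration, not the authors' implementation: the image size, number of components, network size, and the synthetic training data are all assumptions for demonstration only.

```python
# Hypothetical sketch of PCA + MLP head pose estimation as outlined in
# the abstract. All dimensions and hyperparameters are illustrative
# assumptions, not the paper's actual settings.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for grayscale head images (n samples, 32x32 pixels)
# paired with ground-truth (yaw, pitch) angles, as one might collect
# from a calibration grid of known gaze targets.
n_samples, img_dim = 200, 32 * 32
images = rng.random((n_samples, img_dim))
angles = rng.uniform(-35.0, 35.0, size=(n_samples, 2))  # (yaw, pitch), degrees

# Step 1: represent each image by a small vector of PCA coefficients.
pca = PCA(n_components=20)
coeffs = pca.fit_transform(images)

# Step 2: train an MLP to map PCA coefficients -> (yaw, pitch).
mlp = MLPRegressor(hidden_layer_sizes=(30,), max_iter=2000, random_state=0)
mlp.fit(coeffs, angles)

# Estimate the head pose of a new image.
new_image = rng.random((1, img_dim))
head_yaw, head_pitch = mlp.predict(pca.transform(new_image))[0]

# The combined sightline could then be approximated by adding the
# eye-in-head gaze angle (from the eye tracker, within its +/-35 deg
# range) to the estimated head pose -- a simplifying assumption here.
eye_yaw = 20.0  # example eye-tracker reading, degrees
sightline_yaw = head_yaw + eye_yaw
print(f"head yaw={head_yaw:.1f} deg, combined sightline yaw={sightline_yaw:.1f} deg")
```

The additive head-plus-eye combination in the last step is a common first-order approximation for extending gaze range beyond the eye tracker's field of view; the paper's exact fusion method is not detailed in the abstract.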
