Journal of Vision

Improving gaze accuracy and predicting fixation in real time with video based eye trackers


Abstract

Studies of eye movements require accurate gaze, fixation, and saccade detection, and most recent studies use video-based eye trackers for this purpose. We present two methods that significantly improve current eye-tracking technology, with only minor additions to standard experimental protocols. First, for video-based eye trackers, we characterize a significant pupil-size-dependent artifact which systematically biases reported gaze position. By varying display luminance while subjects maintain fixation, we observe corresponding changes in pupil size inducing a gaze-position error, and obtain an empirical solution to correct it. Applying our technique in software to a commercial video-based eye tracker, we obtain a substantial improvement in the accuracy of gaze position. After correction, the standard deviation of gaze positions around a point of fixation during a 10-second interval is reduced by as much as 7.5° and 5.9° in the worst case, with an average reduction of 2.29° and 2.95° across subjects (n = 6) and screen positions (m = 9), for the horizontal and vertical directions, respectively. Additionally, we describe a simple yet effective method for predicting the next fixation while a saccade is in flight. Leveraging the relationship between peak velocity and the time remaining in a saccade, we fit model parameters to individual subjects and then use on-line velocity data to predict future fixations. To evaluate the scheme, subjects free-viewed a four-minute introduction of a nature documentary. For a stimulus display refresh rate of 100 Hz, we correctly predict fixation onsets to within a frame 95% of the time. Our methodology improves gaze accuracy and gives experimenters direct access to a window of time immediately around the onset of fixation, opening the door for gaze- and saccade-contingent experiments using current commercial eye trackers.
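The pupil-size correction described above — recording gaze error at a known fixation target while luminance changes drive pupil size, then fitting an empirical error model — can be illustrated with a minimal sketch. The linear error model, the function names, and the calibration interface here are illustrative assumptions, not the authors' published implementation:

```python
import numpy as np

def fit_pupil_correction(pupil_sizes, gaze_errors):
    """Fit a linear map from pupil size to gaze-position error.

    Hypothetical calibration step: while the subject fixates a known
    target under varying display luminance, record pupil size and the
    reported-minus-true gaze offset.

    pupil_sizes : (n,) array of pupil diameters (arbitrary units).
    gaze_errors : (n, 2) array of gaze offsets (horizontal, vertical),
                  e.g. in degrees of visual angle.
    Returns (slope, intercept), each of shape (2,).
    """
    # Ordinary least squares: error ≈ slope * pupil_size + intercept,
    # fitted independently for the horizontal and vertical components.
    A = np.column_stack([pupil_sizes, np.ones_like(pupil_sizes)])
    coef, *_ = np.linalg.lstsq(A, gaze_errors, rcond=None)
    return coef[0], coef[1]

def correct_gaze(raw_gaze, pupil_size, slope, intercept):
    """Subtract the pupil-size-predicted error from a raw gaze sample."""
    return np.asarray(raw_gaze) - (slope * pupil_size + intercept)
```

In practice such a fit would be repeated per subject (and, as in the abstract's design, per screen position), since the artifact's magnitude varies across both.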


