Journal of Vision

A new method for comparing scanpaths based on vectors and dimensions


Abstract

We make different sequences of eye movements, or scanpaths, depending on what we are viewing and the current task we are carrying out (e.g., Land, Mennie, & Rusted, 1999). In recent years, research efforts have been very informative in identifying commonalities between scanpath pairs, allowing us to quantify, for example, the similarity in eye movement behaviour between experts and novices (Underwood, Humphrey, & Foulsham, 2008), or between encoding and recognition of the same image (Foulsham & Underwood, 2008). However, common methods for comparing scanpaths (e.g., "string-edit", based on Levenshtein, 1966, or "position measures", see Mannan, Ruddock, & Wooding, 1995) fail to capture both the spatial and temporal aspects of scanpaths. Even the newest techniques (e.g., "ScanMatch", Cristino, Mathôt, Theeuwes, & Gilchrist, 2010) are restricted by the fact that they rely on the division of space into Areas of Interest (AOIs), thus limiting the spatial resolution of the similarity metric produced. Here we validate a new algorithm for comparing scanpaths (Jarodzka, Holmqvist, & Nyström, 2010) with eye movement data from human observers. Instead of relying on the quantization of space into AOIs, our method represents scanpaths as geometrical vectors, which retain temporal order and spatial position. Scanpaths are then compared across several dimensions (shape, position, length, direction, and duration), and a similarity value is returned for each. Using this new multidimensional approach, our data from two experiments highlight aspects of scanpath similarity that cannot otherwise be quantified: for instance, when scanpaths are clearly similar but spatially downscaled. Moreover, we show how scanpath similarity changes depending on the task, using our algorithm in comparison with the most popular alternatives. These data demonstrate that our vector-based, multidimensional approach to scanpath comparison compares favourably with the alternatives, and should encourage a shift away from methods rooted in the Levenshtein principle or spatial position alone.
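To make the comparison dimensions concrete, the sketch below illustrates the general idea in Python: fixations given as (x, y, duration) triples are turned into saccade vectors, and two scanpaths are scored on shape, length, direction, position, and duration. This is a minimal illustrative sketch, not the authors' implementation: it assumes scanpaths with at least two fixations, compares saccades pairwise up to the length of the shorter path instead of performing the simplification and temporal alignment described by Jarodzka et al. (2010), and uses arbitrary normalisation constants (the screen diagonal) chosen purely for illustration.

```python
import math

def saccade_vectors(scanpath):
    """Saccade vectors (dx, dy) between consecutive (x, y, duration) fixations."""
    return [(x2 - x1, y2 - y1)
            for (x1, y1, _), (x2, y2, _) in zip(scanpath, scanpath[1:])]

def angular_difference(a, b):
    """Smallest angle between two directions, in [0, pi]."""
    d = abs(a - b) % (2 * math.pi)
    return d if d <= math.pi else 2 * math.pi - d

def compare_scanpaths(sp1, sp2, screen_diag):
    """Return a similarity value per dimension (1 = identical, 0 = maximally different)."""
    v1, v2 = saccade_vectors(sp1), saccade_vectors(sp2)
    n = min(len(v1), len(v2))  # compare pairwise up to the shorter path (no alignment)

    def similarity(diffs, max_diff):
        # Average difference, normalised by max_diff and inverted into a similarity score.
        return 1.0 - (sum(diffs) / len(diffs)) / max_diff

    # Shape: length of the difference vector between paired saccades.
    shape = similarity([math.hypot(a[0] - b[0], a[1] - b[1])
                        for a, b in zip(v1[:n], v2[:n])], 2 * screen_diag)
    # Length: difference in saccade amplitude.
    length = similarity([abs(math.hypot(*a) - math.hypot(*b))
                         for a, b in zip(v1[:n], v2[:n])], screen_diag)
    # Direction: angular difference between paired saccades.
    direction = similarity([angular_difference(math.atan2(a[1], a[0]),
                                               math.atan2(b[1], b[0]))
                            for a, b in zip(v1[:n], v2[:n])], math.pi)
    # Position: Euclidean distance between paired fixations.
    position = similarity([math.hypot(f1[0] - f2[0], f1[1] - f2[1])
                           for f1, f2 in zip(sp1[:n + 1], sp2[:n + 1])], screen_diag)
    # Duration: difference in fixation duration, relative to the longer of the two.
    duration = similarity([abs(f1[2] - f2[2]) / max(f1[2], f2[2])
                           for f1, f2 in zip(sp1[:n + 1], sp2[:n + 1])], 1.0)

    return {"shape": shape, "length": length, "direction": direction,
            "position": position, "duration": duration}

# Hypothetical usage: two short scanpaths on a 1024 x 768 display.
sp_a = [(100, 100, 250), (400, 120, 300), (420, 380, 200)]
sp_b = [(110,  95, 240), (390, 130, 310), (430, 400, 220)]
print(compare_scanpaths(sp_a, sp_b, screen_diag=math.hypot(1024, 768)))
```

Because each dimension is reported separately, a pair of scanpaths that is, for example, spatially downscaled but otherwise similar would score high on shape and direction while scoring lower on position and length, which is exactly the kind of pattern a single AOI-based similarity score cannot reveal.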
