IEEE Transactions on Human-Machine Systems

Software Architecture for Automating Cognitive Science Eye-Tracking Data Analysis and Object Annotation

Abstract

The advancement of wearable eye-tracking technology enables cognitive researchers to capture vast amounts of eye gaze information while participants complete specific tasks without restrictions on their movement. However, while eye trackers can overlay a gaze indicator on the scene video, identifying the specific objects being looked at and analyzing the resulting dataset are still accomplished mostly by manual annotation, an approach that is cost-prohibitive, time-consuming, and prone to human error. This analytic bottleneck limits researchers' ability to mine the data efficiently, ultimately restricting the number of scenarios that can feasibly be conducted within budget. Here, the first fully automated solution for eye-tracking data analysis is presented, which eliminates the need for manual annotation. The proposed software architecture, gaze to object classification (GoC), processes the gaze-overlaid video from commercially available wearable eye trackers, recognizes and classifies the specific object a user is focusing on, and calculates the gaze duration. GoC utilizes an image cross-correlation method to locate the gaze indicator and an image-similarity measure to support faster processing. The presented system has been successfully adopted by cognitive psychologists. GoC's exceptional performance in analyzing a case study spanning over 50 h of mobile eye-tracking video is presented, along with an accuracy and cost-analysis comparison between GoC and state-of-the-art manual annotation software. GoC has game-changing potential for increasing the ecological validity of using eye-tracking technology in cognitive research.
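The two processing steps the abstract names, cross-correlation to locate the gaze indicator and an image-similarity measure to avoid redundant work, can be illustrated with a short sketch. The Python/OpenCV code below is a hypothetical reconstruction, not the paper's implementation: it assumes the gaze indicator is a fixed-appearance overlay (e.g., a crosshair) that can be found by normalized cross-correlation, and all function names, the 0.8 score threshold, and the mean-absolute-difference frame gate are illustrative assumptions.

    import cv2

    def locate_gaze_indicator(frame, template, threshold=0.8):
        # Normalized cross-correlation between the scene frame and the
        # indicator template; the peak marks the most likely gaze location.
        scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(scores)
        if max_val < threshold:
            return None  # indicator absent, e.g., a blink or tracking loss
        h, w = template.shape[:2]
        # Return the center of the best-matching window.
        return (max_loc[0] + w // 2, max_loc[1] + h // 2)

    def frame_changed(prev_frame, frame, tol=2.0):
        # Cheap similarity gate: only re-run the (expensive) object
        # classification when the mean absolute pixel difference between
        # consecutive frames exceeds a small tolerance.
        return cv2.absdiff(prev_frame, frame).mean() > tol

In a processing loop over the gaze-overlaid video, each frame would first pass the similarity gate; only when it differs enough from its predecessor would the gaze indicator be relocated and the object under it reclassified. Per-object gaze duration then follows from counting consecutive frames attributed to the same object and dividing by the video's frame rate.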