ACM Transactions on Interactive Intelligent Systems

Comparing and Combining Interaction Data and Eye-tracking Data for the Real-time Prediction of User Cognitive Abilities in Visualization Tasks

Abstract

Previous work has shown that some user cognitive abilities relevant to processing information visualizations can be predicted from eye-tracking data. This type of user modeling is important for devising visualizations that can detect a user's abilities and adapt accordingly during the interaction. In this article, we extend previous user modeling work by investigating, for the first time, interaction data as an alternative source for predicting cognitive abilities during visualization processing when it is not feasible to collect eye-tracking data. We present an extensive comparison of user models based solely on eye-tracking data, solely on interaction data, and on a combination of the two. Although we found that eye-tracking data generate the most accurate predictions, results show that interaction data can still outperform a majority-class baseline, meaning that adaptation for interactive visualizations could be enabled using interaction data alone, even when it is not feasible to perform eye tracking. Furthermore, we found that at the very beginning of the task, interaction data can predict several cognitive abilities with better accuracy than eye-tracking data, which is valuable for delivering adaptation early in the task. We also extend previous work by examining the value of multimodal classifiers that combine interaction data and eye-tracking data, with promising results for some of our target user cognitive abilities. Next, we build on previous work by extending both the types of visualizations considered and the set of cognitive abilities that can be predicted from either eye-tracking data or interaction data. Finally, we evaluate how noise in gaze data impacts prediction accuracy and find that retaining rather noisy gaze datapoints can yield equal or even better predictions than discarding them, a novel and important contribution for devising adaptive visualizations in real settings, where eye-tracking data are typically noisier than in laboratory settings.
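
To make the comparison concrete, the following is a minimal sketch (in Python, with scikit-learn) of how gaze-only, interaction-only, and combined multimodal user models might be evaluated against the majority-class baseline mentioned above. The feature sets, the Random Forest classifier, and the cross-validation setup are illustrative assumptions, not the article's actual pipeline.

    # Minimal sketch: unimodal vs. multimodal user models against a
    # majority-class baseline. Features, classifier, and evaluation
    # setup are illustrative assumptions, not the authors' pipeline.
    import numpy as np
    from sklearn.dummy import DummyClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_users = 120

    # Stand-ins for per-user summary features; in practice these would be
    # computed from logged gaze samples (e.g., fixation statistics) and
    # interaction events (e.g., click and hover statistics).
    gaze_feats = rng.normal(size=(n_users, 5))
    interaction_feats = rng.normal(size=(n_users, 4))
    multimodal_feats = np.hstack([gaze_feats, interaction_feats])

    # Binary label, e.g., low vs. high on one target cognitive ability.
    y = rng.integers(0, 2, size=n_users)

    for name, X in [("gaze only", gaze_feats),
                    ("interaction only", interaction_feats),
                    ("multimodal", multimodal_feats)]:
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        print(name, cross_val_score(clf, X, y, cv=5).mean())

    # Majority-class baseline: always predict the most frequent label.
    baseline = DummyClassifier(strategy="most_frequent")
    print("baseline", cross_val_score(baseline, multimodal_feats, y, cv=5).mean())

Under this framing, a data source is useful for driving adaptation only insofar as its cross-validated accuracy exceeds the majority-class baseline's.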