
A Computational Model of 'Active Vision' for Visual Search in Human-Computer Interaction


Abstract

Human visual search plays an important role in many human-computer interaction (HCI) tasks. Better models of visual search are needed not just to predict overall performance outcomes, such as whether people will be able to find the information needed to complete an HCI task, but to understand the many human processes that interact in visual search, which will in turn inform the detailed design of better user interfaces. This article describes a detailed instantiation, in the form of a computational cognitive model, of a comprehensive theory of human visual processing known as "active vision" (Findlay & Gilchrist, 2003). The computational model is built using the Executive Process-Interactive Control (EPIC) cognitive architecture. Eye-tracking data from three experiments inform the development and validation of the model. The modeling asks, and at least partially answers, the four questions of active vision: (a) What can be perceived in a fixation? (b) When do the eyes move? (c) Where do the eyes move? (d) What information is integrated between eye movements? Answers include: (a) Items nearer the point of gaze are more likely to be perceived, and the visual features of objects are sometimes misidentified. (b) The eyes move after the fixated visual stimulus has been processed (i.e., has entered working memory). (c) The eyes tend to go to nearby objects. (d) Only the coarse spatial information of what has been fixated is likely maintained between fixations. The model developed to answer these questions has both scientific and practical value in that the model gives HCI researchers and practitioners a better understanding of how people visually interact with computers, and provides a theoretical foundation for predictive analysis tools that can predict aspects of that interaction.
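The four answers summarized in the abstract can be illustrated with a toy simulation. This is not the EPIC-based model described in the article; it is a minimal sketch under assumed parameters (a flat 10% misidentification rate, a strict nearest-object saccade policy, and invented item names), intended only to show how the answers fit together as a search loop.

```python
import math
import random

def simulate_search(items, target, start=(0.0, 0.0), seed=0):
    """Toy 'active vision' search loop: saccade to the nearest
    unexamined item, process it, repeat until the target is found
    or every item has been examined. Returns the fixation count."""
    rng = random.Random(seed)
    gaze = start
    unexamined = dict(items)  # label -> (x, y); coarse locations only
    fixations = 0
    while unexamined:
        fixations += 1
        # (c) Where do the eyes move? To a nearby (here: nearest) object.
        label, pos = min(unexamined.items(),
                         key=lambda kv: math.dist(gaze, kv[1]))
        gaze = pos
        # (b) The eyes move on only after the fixated item is processed.
        # (d) Between fixations, only the item's coarse location is kept,
        #     so it is simply marked as examined.
        del unexamined[label]
        # (a) Visual features are sometimes misidentified; here an
        #     assumed 10% chance of missing the fixated target.
        if label == target and rng.random() > 0.1:
            return fixations
    return fixations  # examined everything without identifying the target

items = {"A": (2, 1), "B": (5, 5), "target": (1, 1), "C": (8, 2)}
n = simulate_search(items, "target")
```

With the layout above, the nearest item to the starting gaze is the target itself, so the search ends on the first fixation. The real model additionally degrades feature perception with eccentricity rather than using a fixed error rate.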

Bibliographic Information

  • Source: Human-Computer Interaction, 2011, No. 4, pp. 285-314 (30 pages)
  • Author affiliations

    Air Force Research Laboratory;

    Department of Computer and Information Science at the University of Oregon;

  • Indexed in: Science Citation Index (SCI); Engineering Index (EI)
  • Format: PDF
  • Language: English
