ACM Transactions on Interactive Intelligent Systems

Predicting User Confidence During Visual Decision Making


       

Abstract

People are not infallible, consistent "oracles": their confidence in decision-making may vary significantly between tasks and over time. We have previously reported the benefits of using an interface and algorithms that explicitly captured and exploited users' confidence: error rates were reduced by up to 50% for an industrial multi-class learning problem, and the number of interactions required in a design-optimisation context was reduced by 33%. Access to users' confidence judgements could significantly benefit intelligent interactive systems in industry, in areas such as intelligent tutoring systems, and in health care. There are many reasons for wanting to capture information about confidence implicitly. Some are ergonomic, but others are more "social", such as wishing to understand (and possibly take account of) users' cognitive state without interrupting them. We investigate the hypothesis that users' confidence can be accurately predicted from measurements of their behaviour. Eye-tracking systems were used to capture users' gaze patterns as they undertook a series of visual decision tasks, after each of which they reported their confidence on a 5-point Likert scale. Subsequently, predictive models were built using "conventional" machine learning approaches on numerical summary features derived from users' behaviour. We also investigate the extent to which the deep learning paradigm can reduce the need to design features specific to each application by creating "gaze maps", visual representations of the trajectories and durations of users' gaze fixations, and then training deep convolutional networks on these images. Treating the prediction of user confidence as a two-class problem (confident/not confident), we attained classification accuracy of 88% for the scenario of new users on known tasks, and 87% for known users on new tasks.
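A "gaze map" of the kind described above can be sketched as follows. This is a minimal illustration under assumed conventions (fixation coordinates normalised to [0, 1], fixation duration deposited as pixel intensity, a faint interpolated line for the scan-path); the function name, image size, and constants are illustrative, not taken from the paper:

```python
import numpy as np

def gaze_map(fixations, size=(64, 64)):
    """Render fixations [(x, y, duration), ...] with x, y in [0, 1] into a
    2-D intensity image: each fixation deposits its duration at its cell,
    and consecutive fixations are joined by a low-intensity trajectory line."""
    h, w = size
    img = np.zeros((h, w), dtype=float)
    cells = []
    for x, y, dur in fixations:
        r = min(int(y * h), h - 1)
        c = min(int(x * w), w - 1)
        img[r, c] += dur          # longer fixations -> brighter pixels
        cells.append((r, c))
    # Connect consecutive fixations so the scan-path is visible in the image.
    for (r0, c0), (r1, c1) in zip(cells, cells[1:]):
        n = max(abs(r1 - r0), abs(c1 - c0), 1)
        for t in np.linspace(0.0, 1.0, n + 1):
            img[round(r0 + t * (r1 - r0)), round(c0 + t * (c1 - c0))] += 0.1
    if img.max() > 0:
        img /= img.max()          # normalise so the image is CNN-friendly
    return img
```

Images produced this way can be fed directly to a standard convolutional network, which is what makes the approach transferable: no per-application feature engineering is needed.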
Considering the confidence as an ordinal variable, we produced regression models with a mean absolute error of ≈0.7 in both cases. Capturing just a simple subset of non-task-specific numerical features gave slightly worse, but still quite high, accuracy (e.g., MAE ≈1.0). Results obtained with gaze maps and convolutional networks are competitive, despite not having access to the longer-term information about users and tasks that was vital for the "summary" feature sets. This suggests that the gaze-map-based approach forms a viable, transferable alternative to handcrafting features for each different application. These results provide significant evidence to confirm our hypothesis, and offer a way of substantially improving many interactive artificial intelligence applications via the addition of cheap, non-intrusive hardware and computationally cheap prediction algorithms.
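To make the reported metrics concrete: MAE on the ordinal 5-point Likert scale, and the binarisation into the two-class (confident/not confident) problem, can be computed as below. The ratings, predictions, and the ≥4 threshold are made-up illustrations, not the paper's data:

```python
def mean_absolute_error(y_true, y_pred):
    """Mean absolute error between true ordinal ratings and predictions."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical 5-point Likert confidence ratings vs. model output.
true_ratings = [5, 4, 2, 3, 5]
predictions  = [4.5, 4.0, 3.0, 2.8, 4.3]
mae = mean_absolute_error(true_ratings, predictions)   # 0.48 here

# Binarising the same scale (e.g. rating >= 4 counted as "confident")
# recovers the two-class formulation used for the accuracy figures.
confident = [r >= 4 for r in true_ratings]
```

An MAE of ≈0.7 on a 1-5 scale means predictions land, on average, within one Likert step of the user's self-reported confidence.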


