Frontiers in Psychology

Individual Differences in Attributes of Trust in Automation: Measurement and Application to System Design

Abstract

Computer-based automation of sensing, analysis, memory, decision-making, and control in industrial, business, medical, scientific, and military applications is becoming increasingly sophisticated, employing various techniques of artificial intelligence for learning, pattern recognition, and computation. Research has shown that proper use of automation is highly dependent on operator trust. As a result, the topic of trust has become an active subject of research and discussion in the applied disciplines of human factors and human-systems integration. While various papers have pointed to the many factors that influence trust, there currently exists no consensual definition of trust. This paper reviews previous studies of trust in automation with emphasis on its meaning and on the factors determining subjective assessment of trust and automation trustworthiness (which sometimes, but not always, is regarded as an objectively measurable property of the automation). The paper asserts that certain attributes normally associated with human morality can usefully be applied to computer-based automation as it becomes more intelligent and more responsive to its human user. The paper goes on to suggest that the automation, based on its own experience with the user, can develop reciprocal attributes that characterize its own trust of the user and adapt accordingly. This situation can be modeled as a formal game in which the automation user and the automation (computer) engage one another according to a payoff matrix of utilities (benefits and costs). While this is a concept paper lacking empirical data, it offers hypotheses by which future researchers can test for individual differences in the detailed attributes of trust in automation, and determine criteria for adjusting automation design to best accommodate these user differences.
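The game-theoretic framing in the abstract can be made concrete with a minimal sketch. The strategy labels, utility numbers, and the best_response helper below are hypothetical assumptions chosen only to show what a payoff matrix of utilities (benefits and costs) for the user and the automation might look like; none of them are taken from the paper.

# Illustrative sketch only: strategy labels and utility values are assumptions,
# not the paper's model; they show the shape of a user-automation "trust game."

user_strategies = ["rely", "monitor_and_override"]
auto_strategies = ["act_autonomously", "ask_confirmation"]

# payoffs[(user_choice, auto_choice)] = (user_utility, automation_utility)
payoffs = {
    ("rely", "act_autonomously"):                 (3, 3),  # mutual trust, smooth operation
    ("rely", "ask_confirmation"):                 (1, 2),  # unnecessary interruptions
    ("monitor_and_override", "act_autonomously"): (0, 1),  # conflict, duplicated effort
    ("monitor_and_override", "ask_confirmation"): (2, 2),  # cautious but workable
}

def best_response(other_choice, player):
    """Best strategy for one player, given the other player's fixed choice."""
    if player == "user":
        return max(user_strategies, key=lambda s: payoffs[(s, other_choice)][0])
    return max(auto_strategies, key=lambda s: payoffs[(other_choice, s)][1])

print(best_response("act_autonomously", "user"))  # -> rely
print(best_response("rely", "automation"))        # -> act_autonomously

With these illustrative numbers the game has two mutually best-response outcomes, a high-trust one (rely, act_autonomously) and a low-trust one (monitor_and_override, ask_confirmation); changing the utilities shifts which outcome the players settle into, which is the sense in which the paper suggests automation design could adapt to individual differences in user trust.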