
Trust in Autonomous Systems for Threat Analysis: A Simulation Methodology


Abstract

Human operators will increasingly team with autonomous systems in military and security settings, for example, in the evaluation and analysis of threats. Determining whether humans are threatening is a particular challenge to which future autonomous systems may contribute. Optimal trust calibration is critical for mission success, but most trust research has addressed conventional automated systems of limited intelligence. This article identifies multiple factors that may influence trust in autonomous systems. Trust may be undermined by various sources of demand and uncertainty. These include the cognitive demands resulting from the complexity and unpredictability of the system, "social" demands resulting from the system's capacity to function as a team member, and self-regulative demands associated with perceived threats to personal competence. It is proposed that existing gaps in trust research may be addressed using simulation methodologies. A simulated environment developed by the research team is described. It represents a "town-clearing" task in which the human operator teams with a robot that can be equipped with various sensors and with software for intelligent analysis of sensor data. The simulator's functionality is illustrated, together with future research directions.