
Cognitive Reasoning and Trust in Human-Robot Interactions


Abstract

We are witnessing accelerating technological advances in autonomous systems, of which driverless cars and home-assistive robots are prominent examples. As mobile autonomy becomes embedded in our society, we increasingly depend on decisions made by mobile autonomous robots and interact with them socially. Key questions that need to be asked are how to ensure safety and trust in such interactions. How do we know when to trust a robot? How much should we trust? And how much should the robots trust us? This paper will give an overview of a probabilistic logic for expressing trust between human or robotic agents, such as "agent A has 99% trust in agent B's ability or willingness to perform a task", and the role it can play in explaining trust-based decisions and agents' dependence on one another. The logic is founded on a probabilistic notion of belief, supports cognitive reasoning about goals and intentions, and admits quantitative verification via model checking, which can be used to evaluate such trust in human-robot interactions. The paper concludes by summarising recent advances and future challenges for modelling and verification in this important field.
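
As a rough illustration of the kind of statement such a logic must capture, the example "agent A has 99% trust in agent B's ability or willingness to perform a task" could be rendered with a belief-indexed trust operator. The notation below (\mathsf{trust}, \mathcal{B}_A, \varphi_\tau) is an assumption made for exposition, not the syntax defined in the paper:

% Illustrative sketch only; the operator and symbols are hypothetical, not the paper's definitions.
\[
  \mathsf{trust}^{A \to B}_{\ge 0.99}\,\varphi_\tau
  \;\equiv\;
  \Pr_{\mathcal{B}_A}\!\bigl[\, B \models \varphi_\tau \,\bigr] \;\ge\; 0.99
\]

Here \mathcal{B}_A stands for A's probabilistic beliefs about B's possible behaviours and \varphi_\tau for a property such as "B eventually completes task \tau"; quantitative verification would then amount to checking whether such a threshold holds in a probabilistic model of the interaction, for instance with a probabilistic model checker.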
