
Modeling Trust to Improve Human-Robot Interaction.

Abstract

Throughout the history of automation, there have been numerous accidents attributed to the inappropriate use of automated systems. Over-reliance and under-reliance on automation are well-documented problems in fields where automation has been employed for a long time, such as factories and aviation. Research has shown that one of the key factors that influence an operator's reliance on automated systems is his or her trust of the system. Several factors, including risk, workload, and task difficulty, have been found to influence an operator's trust of an automated system. With a model of trust based upon these factors, it is possible to design automated systems that foster well-calibrated trust and thereby prevent the misuse of automation.

Over the past decade, robot systems have become more commonplace and increasingly autonomous. With the increased use of robot systems in multiple application domains, models of trust and operator behavior for human-robot interaction (HRI) must be created now in order to avoid some of the problems encountered by other automation domains in the past. Since traditional automation domains and HRI are significantly different, we reexamine trust and control allocation (the operator's usage of autonomous behaviors) as it relates to robots with autonomous capabilities in order to discover the relevant factors in HRI.

This dissertation examines existing work in traditional automation that is relevant to HRI and, based on that information, builds an experimental methodology to closely mimic real-world remote robot teleoperation tasks. We also present results from multiple experiments examining the relationship between the different factors being investigated with respect to trust and control allocation. Based on these results, a model for human interaction with remote robots for teleoperation (HARRT) is proposed, and design guidelines to help improve overall performance are presented based on the model.
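To make the calibration idea in the abstract concrete, the following is a minimal, purely illustrative sketch of a trust model built on the factors the abstract names (risk, workload, task difficulty) and of a check against automation reliability. The linear form, weights, and the `estimate_trust`/`calibration` helpers are assumptions for illustration only; they are not the HARRT model proposed in the dissertation.

```python
# Illustrative sketch only: a toy linear trust model over the factors named in
# the abstract (risk, workload, task difficulty). The weights, scales, and the
# calibration check are hypothetical and are not taken from the dissertation.

def estimate_trust(risk: float, workload: float, task_difficulty: float) -> float:
    """Return a trust score in [0, 1]; inputs are normalized to [0, 1]."""
    base = 0.9  # trust with no adverse factors (assumed)
    score = base - 0.4 * risk - 0.2 * workload - 0.3 * task_difficulty
    return max(0.0, min(1.0, score))

def calibration(trust: float, reliability: float, tol: float = 0.15) -> str:
    """Compare estimated trust with the automation's actual reliability."""
    if trust > reliability + tol:
        return "over-reliance risk"   # operator trusts more than warranted
    if trust < reliability - tol:
        return "under-reliance risk"  # capable autonomy likely to go unused
    return "well calibrated"

if __name__ == "__main__":
    t = estimate_trust(risk=0.6, workload=0.7, task_difficulty=0.5)
    print(f"trust={t:.2f} -> {calibration(t, reliability=0.8)}")
```

In this toy framing, "well-calibrated trust" simply means the operator's trust tracks the automation's actual reliability within a tolerance; the dissertation's experiments study how the named factors shift that relationship.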

Bibliographic details

  • Author

    Desai, Munjal.

  • Affiliation

    University of Massachusetts Lowell.

  • Degree-granting institution University of Massachusetts Lowell.
  • Subject Engineering, Robotics; Computer Science.
  • Degree Ph.D.
  • Year 2012
  • Pages 231 p.
  • Total pages 231
  • Format PDF
  • Language eng
  • CLC classification
  • Keywords
