Human Factors and Ergonomics Society Annual Meeting

The National Academies Board on Human System Integration (BOHSI) Panel: Explainable AI, System Transparency, and Human Machine Teaming

Abstract

The National Academies Board on Human Systems Integration (BOHSI) has organized this session exploring the state of the art and the research and design frontiers for intelligent systems that support effective human machine teaming. An important element in the success of human machine teaming is the ability of the person on the scene to develop appropriate trust in the automated software, including recognizing when it should not be trusted. Research is being conducted in both the Human Factors community and the Artificial Intelligence (AI) community on the characteristics that software needs to display in order to foster appropriate trust; for example, there is a DARPA program on Explainable AI (XAI). The panel brings together prominent researchers from the Human Factors and AI communities to discuss the current state of the art, challenges and shortfalls, and ways forward in developing systems that engender appropriate trust.