International Workshop on Explainable and Transparent AI and Multi-Agent Systems

A Two-Dimensional Explanation Framework to Classify AI as Incomprehensible, Interpretable, or Understandable

Abstract

Because of recent and rapid developments in Artificial Intelligence (AI), humans and AI-systems increasingly work together in human-agent teams. However, in order to effectively leverage the capabilities of both, AI-systems need to be understandable to their human teammates. The branch of explainable AI (XAI) aspires to make AI-systems more understandable to humans, potentially improving human-agent teamwork. Unfortunately, the XAI literature suffers from a lack of agreement regarding the definitions of and relations between the four key XAI-concepts: transparency, interpretability, explainability, and understandability. Inspired by both XAI and social sciences literature, we present a two-dimensional framework that defines and relates these concepts in a concise and coherent way, yielding a classification of three types of AI-systems: incomprehensible, interpretable, and understandable. We also discuss how the established relationships can be used to guide future research into XAI, and how the framework could be used during the development of AI-systems as part of human-AI teams.
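
As a rough illustration of how such a two-dimensional classification might be operationalized, the sketch below encodes two hypothetical binary dimensions (whether a system's inner workings are interpretable, and whether its behaviour is understandable to the human teammate) and maps them onto the three classes named in the abstract. The dimension names and the mapping rule are assumptions made for illustration only; the abstract does not specify the framework's actual axes.

    from dataclasses import dataclass
    from enum import Enum


    class AIClass(Enum):
        """The three system classes named in the abstract."""
        INCOMPREHENSIBLE = "incomprehensible"
        INTERPRETABLE = "interpretable"
        UNDERSTANDABLE = "understandable"


    @dataclass
    class AISystem:
        # Hypothetical binary dimensions, assumed here for illustration;
        # the paper's actual two dimensions may be defined differently.
        interpretable: bool    # can its inner workings be inspected/interpreted?
        understandable: bool   # does the human teammate understand its behaviour?

        def classify(self) -> AIClass:
            # Assumed mapping: understandability dominates, interpretability
            # alone yields the middle class, neither yields incomprehensible.
            if self.understandable:
                return AIClass.UNDERSTANDABLE
            if self.interpretable:
                return AIClass.INTERPRETABLE
            return AIClass.INCOMPREHENSIBLE


    # Example usage
    print(AISystem(interpretable=True, understandable=False).classify())
    # -> AIClass.INTERPRETABLE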
