PeerJ Computer Science

To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods


Abstract

The main objective of eXplainable Artificial Intelligence (XAI) is to provide effective explanations for black-box classifiers. The existing literature lists many desirable properties for explanations to be useful, but there is scarce consensus on how to quantitatively evaluate explanations in practice. Moreover, explanations are typically used only to inspect black-box models, and the proactive use of explanations as decision support is generally overlooked. Among the many approaches to XAI, a widely adopted paradigm is Local Linear Explanations—with LIME and SHAP emerging as state-of-the-art methods. We show that these methods are plagued by many defects, including unstable explanations, divergence of actual implementations from the promised theoretical properties, and explanations for the wrong label. This highlights the need for standard and unbiased evaluation procedures for Local Linear Explanations in the XAI field. In this paper we address the problem of identifying a clear and unambiguous set of metrics for the evaluation of Local Linear Explanations. This set includes both existing and novel metrics defined specifically for this class of explanations. All metrics have been included in an open Python framework, named LEAF. The purpose of LEAF is to provide a reference for end users to evaluate explanations in a standardised and unbiased way, and to guide researchers towards developing improved explainable techniques.
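To illustrate the kind of instability the abstract refers to, the sketch below re-runs LIME several times on the same instance and measures how much the top-k feature sets agree. This is only a minimal, hedged example of a stability-style metric; it does not use LEAF's actual API, and the dataset, model, k, and Jaccard-based score are illustrative assumptions.

```python
# Illustrative stability check for Local Linear Explanations (not the LEAF API):
# repeat LIME on one instance and compare the resulting top-k feature sets.
# Unstable explainers produce low pairwise Jaccard similarity across runs.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    discretize_continuous=True,
)

def top_k_features(instance, k=5):
    """Return the indices of the k most important features for one LIME run."""
    exp = explainer.explain_instance(instance, clf.predict_proba, num_features=k)
    return {idx for idx, _ in exp.as_map()[1]}  # label 1 is explained by default

def jaccard(a, b):
    """Jaccard similarity between two feature-index sets."""
    return len(a & b) / len(a | b)

instance = X[0]
runs = [top_k_features(instance) for _ in range(10)]
scores = [jaccard(runs[i], runs[j])
          for i in range(len(runs)) for j in range(i + 1, len(runs))]
print(f"Mean pairwise Jaccard stability over 10 runs: {np.mean(scores):.3f}")
```

A score close to 1 means the explainer consistently selects the same features for the same instance; markedly lower values signal the instability that standardised evaluation frameworks such as LEAF are meant to expose.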
