International Journal of Neural Systems

Middle-Level Features for the Explanation of Classification Systems by Sparse Dictionary Methods



Abstract

Machine learning (ML) systems are affected by a pervasive lack of transparency. The eXplainable Artificial Intelligence (XAI) research area addresses this problem and the related issue of explaining the behavior of ML systems in terms that are understandable to human beings. In many XAI approaches, the outputs of ML systems are explained in terms of low-level features of their inputs. However, these approaches leave a substantial explanatory burden on human users, insofar as the latter are required to map low-level properties onto more salient and readily understandable parts of the input. To alleviate this cognitive burden, an alternative model-agnostic framework is proposed here. This framework is instantiated to address explanation problems in the context of ML image classification systems, without relying on pixel relevance maps or other low-level features of the input. More specifically, sets of perceptually salient middle-level properties of classification inputs are obtained by applying sparse dictionary learning techniques. These middle-level properties are used as building blocks for explanations of image classifications. The resulting explanations are parsimonious, owing to their reliance on a limited set of middle-level image properties, and they can be contrastive, because the set of middle-level image properties can be used to explain why the system advanced the proposed classification over other antagonist classifications. In view of its model-agnostic character, the proposed framework is adaptable to a variety of other ML systems and explanation problems.
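As a rough illustration of the kind of pipeline the abstract describes, and not the authors' exact method, the following Python sketch uses scikit-learn's MiniBatchDictionaryLearning to learn a small dictionary of image atoms and then reads off, for one input image, the few atoms with the largest sparse-code coefficients as candidate middle-level building blocks of an explanation. The dataset, the number of atoms, and the sparsity settings are illustrative assumptions.

import numpy as np
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import MiniBatchDictionaryLearning

# Small face dataset; each image is a 64x64 grayscale array.
faces = fetch_olivetti_faces(shuffle=True, random_state=0)
X = faces.data  # shape (400, 4096), one flattened image per row

# Learn a sparse dictionary whose atoms stand in for middle-level,
# perceptually salient image properties (settings are illustrative).
dico = MiniBatchDictionaryLearning(
    n_components=30,
    alpha=1.0,
    batch_size=50,
    transform_algorithm="lasso_lars",
    transform_alpha=1.0,
    random_state=0,
)
codes = dico.fit(X).transform(X)   # sparse codes, shape (400, 30)
atoms = dico.components_           # dictionary atoms, shape (30, 4096)

# Describe one input image by the few atoms with the largest
# absolute coefficients in its sparse code.
i = 0
code = codes[i]
top_atoms = np.argsort(-np.abs(code))[:5]
print("Image", i, "is mostly reconstructed from atoms:", top_atoms)
print("with coefficients:", np.round(code[top_atoms], 3))

# Each selected atom can be reshaped to 64x64 and shown to the user
# as a visual, middle-level building block of an explanation.

The sketch covers only the dictionary-learning and sparse-coding step; in the framework proposed in the paper, such middle-level properties would further be related to the classifier's output, for instance to indicate which properties support the assigned class over an antagonist class.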

