IEEE International Conference on Data Science and Advanced Analytics

Explaining Explanations: An Overview of Interpretability of Machine Learning



Abstract

There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide insights into their behavior and thought processes. XAI allows users and parts of the internal system to be more transparent, providing explanations of their decisions at some level of detail. These explanations are important to ensure algorithmic fairness, identify potential bias/problems in the training data, and to ensure that the algorithms perform as expected. However, explanations produced by these systems are neither standardized nor systematically assessed. In an effort to create best practices and identify open challenges, we describe foundational concepts of explainability and show how they can be used to classify existing literature. We discuss why current approaches to explanatory methods, especially for deep neural networks, are insufficient. Finally, based on our survey, we conclude with suggested future research directions for explanatory artificial intelligence.
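As a concrete illustration of the kind of explanatory method for deep neural networks the abstract refers to (not code from the paper itself), the sketch below computes a vanilla gradient saliency map for an image classifier; the pretrained ResNet-18 and the random input image are placeholder assumptions.

```python
import torch
import torchvision.models as models

# Minimal sketch under assumed setup: a vanilla gradient saliency map,
# one family of post-hoc explanation techniques such surveys review.
# The pretrained ResNet-18 and the random "image" are stand-ins.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder input image
scores = model(x)                                    # class logits
top_class = scores.argmax(dim=1).item()

# Gradient of the winning class score w.r.t. the input pixels: pixels with
# large gradient magnitude are those the prediction is most sensitive to.
scores[0, top_class].backward()
saliency = x.grad.abs().max(dim=1).values            # (1, 224, 224) saliency map
```

Such gradient-based maps are easy to compute but, as the survey argues, they are only one narrow notion of "explanation" and are neither standardized nor systematically assessed.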
