International Joint Conference on Neural Networks

Confident Interpretations of Black Box Classifiers



Abstract

Deep Learning models provide state-of-the-art classification results but are not human-interpretable. We propose a novel method to interpret the classification results of a black box model a posteriori. We emulate the complex classifier with surrogate decision trees, each of which mimics the behavior of the complex classifier by overestimating one of the classes. This yields a global, interpretable approximation of the black box classifier. Our method provides interpretations that are at once general (applying to many data points), confident (generalizing well to other data points), faithful to the original model (making the same predictions), and simple (easy to understand). Our experiments show that our method outperforms competing methods on these desiderata, and our user study shows that users prefer this type of interpretation over others.
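The core idea in the abstract, a black box emulated by per-class surrogate decision trees that each overestimate one class, can be sketched roughly as follows. This is a minimal illustration under assumed choices (a random forest as the stand-in black box, scikit-learn decision trees, a fixed depth, and a simple class_weight bias toward the positive class); it is not the paper's algorithm.

# Sketch: per-class surrogate decision trees fit to a black box's predictions.
# All model and parameter choices below are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# Stand-in for the uninterpretable black box classifier.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
y_bb = black_box.predict(X)  # surrogates are fit to the black box's outputs, not the true labels

surrogates = {}
for c in np.unique(y_bb):
    # One-vs-rest labels derived from the black box's predictions.
    target = (y_bb == c).astype(int)
    # Up-weighting the positive class biases the tree toward overestimating class c.
    tree = DecisionTreeClassifier(max_depth=3,
                                  class_weight={0: 1.0, 1: 5.0},
                                  random_state=0).fit(X, target)
    surrogates[c] = tree

# Each surrogate is a small, human-readable rule set for one class.
for c, tree in surrogates.items():
    print(f"Surrogate rules for class {c}:")
    print(export_text(tree, feature_names=list(data.feature_names)))

In this sketch the printed decision rules serve as the global, interpretable approximation of the black box; the paper's method additionally evaluates how general, confident, faithful, and simple such interpretations are.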
