International Conference on Mobile Ad Hoc and Sensor Systems

Using Graphical Models as Explanations in Deep Neural Networks


Abstract

Despite its remarkable success, deep learning currently typically operates as a black box. Can models instead produce explicit reasons to explain their decisions? To address that question, we propose to exploit probabilistic graphical models, which are declarative representations of our understanding of the world (e.g., what the relevant variables are and how they interact with each other) and are commonly used to perform causal inference. More specifically, we propose a novel architecture called Deep Explainable Bayesian Networks, whose main idea consists in concatenating a deep network with a Bayesian network and relying on the latter to provide the explanations. We conduct extensive experiments on classical image and text classification tasks. First, the results show that deep explainable Bayesian networks can achieve accuracy comparable to models that are trained on the same datasets but do not produce explanations. Second, the experiments show promising results: the average accuracy of the explanations ranges from 68.3% to 84.8%.
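
The architecture described in the abstract pairs a deep network with a Bayesian network: the deep network maps raw inputs to evidence on interpretable variables, and the Bayesian network performs inference over those variables to yield both the prediction and an explanation. The following is a minimal, self-contained Python sketch of that general idea; the concept variables, conditional probability tables, and the stand-in deep_network function are illustrative assumptions, not the paper's actual model, structure, or numbers.

import numpy as np

# Hypothetical concept variables a deep network might expose for a toy
# bird-vs-plane image classifier; all names and probabilities are illustrative.
CONCEPTS = ["has_wings", "has_beak"]
CLASSES = ["bird", "plane"]

def deep_network(image: np.ndarray) -> dict:
    """Stand-in for a trained deep network: maps an image to soft evidence
    P(concept = true | image) for each concept variable."""
    # A real model would be a CNN; here we derive toy scores from pixel stats.
    brightness = float(image.mean())
    return {"has_wings": 0.9, "has_beak": min(0.95, 0.5 + brightness / 2)}

# Hand-specified Bayesian network: class -> each concept.
# Prior P(class) and tables P(concept = true | class) (illustrative numbers).
P_CLASS = {"bird": 0.5, "plane": 0.5}
P_CONCEPT_GIVEN_CLASS = {
    "has_wings": {"bird": 0.95, "plane": 0.90},
    "has_beak": {"bird": 0.90, "plane": 0.05},
}

def posterior_over_classes(soft_evidence: dict) -> dict:
    """Exact inference by enumeration over the (tiny) concept space,
    weighting each concept assignment by the deep network's soft evidence."""
    scores = {cls: 0.0 for cls in CLASSES}
    for assignment in range(2 ** len(CONCEPTS)):
        values = {c: bool((assignment >> i) & 1) for i, c in enumerate(CONCEPTS)}
        # Weight of this concept assignment under the network's soft evidence.
        weight = 1.0
        for c, v in values.items():
            p = soft_evidence[c]
            weight *= p if v else (1.0 - p)
        for cls in CLASSES:
            likelihood = 1.0
            for c, v in values.items():
                p = P_CONCEPT_GIVEN_CLASS[c][cls]
                likelihood *= p if v else (1.0 - p)
            scores[cls] += P_CLASS[cls] * likelihood * weight
    z = sum(scores.values())
    return {cls: s / z for cls, s in scores.items()}

def explain(soft_evidence: dict, predicted: str) -> list:
    """Explanation = concept states that, under the CPTs, support the prediction."""
    explanation = []
    for c, p in soft_evidence.items():
        state = p >= 0.5
        if (P_CONCEPT_GIVEN_CLASS[c][predicted] >= 0.5) == state:
            explanation.append(f"{c}={state} supports '{predicted}'")
    return explanation

if __name__ == "__main__":
    image = np.random.rand(32, 32)          # placeholder input
    evidence = deep_network(image)
    posterior = posterior_over_classes(evidence)
    predicted = max(posterior, key=posterior.get)
    print("posterior:", posterior)
    print("prediction:", predicted)
    print("explanation:", explain(evidence, predicted))

In this sketch the explanation is read off the Bayesian network's variables rather than from the deep network's hidden activations, which is the division of labor the abstract describes: the deep network handles perception, the graphical model handles reasoning and justification.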
