Asilomar Conference on Signals, Systems, and Computers

Treeview and Disentangled Representations for Explaining Deep Neural Networks Decisions



Abstract

With the advent of highly predictive but opaque deep learning models, it has become more important than ever to understand and explain their predictions. Many popular approaches define interpretability as the inverse of complexity and achieve interpretability at the cost of accuracy, which introduces the risk of producing interpretable but misleading explanations. As humans, we are prone to engaging in exactly this kind of behavior [11]. In this paper, we take the view that the complexity of an explanation should correlate with the complexity of the decision it explains. We propose to build a TreeView representation of the complex model using disentangled representations, which reveals the iterative rejection of unlikely class labels until the correct association is predicted.
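The abstract describes an explanation that iteratively rejects unlikely groups of class labels down a tree until one label remains. As a minimal sketch of that idea (not the paper's actual algorithm), the walk below assumes a hypothetical binary class hierarchy and per-class confidence scores, and records which labels were rejected versus kept at each level:

```python
# Hypothetical sketch: walk a binary class hierarchy, at each level
# rejecting the less likely branch, until a single label survives.
# The hierarchy, scores, and function names are illustrative only.

def leaves(node):
    """Flatten a nested tuple hierarchy into its leaf class labels."""
    if isinstance(node, tuple):
        return [c for child in node for c in leaves(child)]
    return [node]

def branch_score(scores, node):
    """Score a branch by the confidence of its most likely class."""
    return max(scores[c] for c in leaves(node))

def treeview_explain(scores, hierarchy):
    """Return ([(rejected_labels, kept_labels), ...], final_label)."""
    trace = []
    node = hierarchy
    while isinstance(node, tuple):
        left, right = node
        if branch_score(scores, left) >= branch_score(scores, right):
            trace.append((leaves(right), leaves(left)))  # reject right
            node = left
        else:
            trace.append((leaves(left), leaves(right)))  # reject left
            node = right
    return trace, node
```

For example, with `scores = {"cat": 0.1, "dog": 0.7, "car": 0.15, "truck": 0.05}` and `hierarchy = (("cat", "dog"), ("car", "truck"))`, the trace first rejects the vehicle branch, then rejects `"cat"`, leaving `"dog"`; the trace itself is the explanation, with depth proportional to how many decisions were needed.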
