1st EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, 2018

Explaining non-linear Classifier Decisions within Kernel-based Deep Architectures


Abstract

Nonlinear methods such as deep neural networks achieve state-of-the-art performance in several semantic NLP tasks. However, epistemologically transparent decisions are not provided, due to the limited interpretability of the underlying acquired neural models. In neural-based semantic inference tasks, epistemological transparency corresponds to the ability to trace back causal connections between the linguistic properties of an input instance and the produced classification output. In this paper, we propose the use of a methodology, called Layerwise Relevance Propagation, over linguistically motivated neural architectures, namely Kernel-based Deep Architectures (KDA), to guide argumentations and explanation inferences. In this way, each decision provided by a KDA can be linked to real examples that are linguistically related to the input instance: these can be used to motivate the network output. Quantitative analysis shows that richer explanations about the semantic and syntagmatic structures of the examples characterize more convincing arguments in two tasks, i.e. question classification and semantic role labeling.
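To make the redistribution step concrete, the sketch below is a minimal, generic implementation of the Layerwise Relevance Propagation epsilon-rule for a dense layer, not the paper's exact KDA formulation; the function name lrp_epsilon, the toy network, and its dimensions are illustrative assumptions. Roughly, in a KDA the classifier's input dimensions come from a Nyström projection over landmark training examples, so relevance assigned to those dimensions can be read as relevance of real examples.

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """Redistribute output relevance to the inputs of one dense layer (epsilon-rule).
    a: input activations (d_in,), W: weights (d_in, d_out), b: biases (d_out,),
    R_out: relevance of the layer outputs (d_out,). Returns input relevance (d_in,)."""
    z = a @ W + b                                   # pre-activations z_k
    z_stab = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilize small denominators
    s = R_out / z_stab                              # relevance per unit of z_k
    return a * (W @ s)                              # R_j = a_j * sum_k w_jk * s_k

# Toy usage (hypothetical dimensions): propagate the relevance of the predicted
# class back through two dense layers of a small feed-forward classifier.
rng = np.random.default_rng(0)
x = rng.normal(size=5)                   # projected input (e.g. a Nystrom embedding)
W1, b1 = rng.normal(size=(5, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 3)), np.zeros(3)
h = np.maximum(x @ W1 + b1, 0.0)         # hidden layer with ReLU
y = h @ W2 + b2                          # class scores
R_y = np.where(np.arange(3) == y.argmax(), y, 0.0)  # relevance starts at the predicted class
R_h = lrp_epsilon(h, W2, b2, R_y)        # relevance of hidden units
R_x = lrp_epsilon(x, W1, b1, R_h)        # relevance of the input dimensions
print(R_x)
```

Under these assumptions, the entries of R_x rank the input dimensions by their contribution to the decision; in a KDA these dimensions correspond to landmark examples, which is what allows each decision to be motivated by linguistically related training instances.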
