Conference on Empirical Methods in Natural Language Processing (EMNLP)

LISA: Explaining Recurrent Neural Network Judgments via Layer-wise Semantic Accumulation and Example to Pattern Transformation



Abstract

Recurrent neural networks (RNNs) are temporal, cumulative networks that have shown promising results in various natural language processing tasks. Despite their success, it remains a challenge to understand their hidden behavior. In this work, we analyze and interpret the cumulative nature of RNNs via a proposed technique named Layer-wIse Semantic Accumulation (LISA), which explains decisions and detects the most likely (i.e., saliency) patterns that the network relies on while making decisions. We demonstrate (1) LISA: how an RNN accumulates or builds semantics during its sequential processing of a given text example toward the expected response, and (2) example2pattern: what the saliency patterns look like for each category in the data, according to the network's decision making. We analyze the sensitivity of RNNs to different inputs, checking the increase or decrease in prediction scores, and further extract the saliency patterns learned by the network. We employ two relation classification datasets, SemEval-2010 Task 8 and TAC KBP Slot Filling, to explain RNN predictions via LISA and example2pattern.
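To make the accumulation idea concrete, below is a minimal sketch of LISA-style prefix sensitivity analysis in PyTorch, not the authors' implementation: a toy GRU classifier is fed growing prefixes w_1..w_t of a tokenized sentence, and the probability of the target class is recorded after each step. The `RNNClassifier` and `lisa_prefix_scores` names are hypothetical, and the model is assumed to be already trained.

```python
# A minimal sketch of LISA-style prefix sensitivity analysis (assumption:
# a trained RNN classifier is available; all names here are illustrative).
import torch
import torch.nn as nn

class RNNClassifier(nn.Module):
    """Toy GRU classifier: embed tokens, run a GRU, classify the final state."""
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_classes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):                 # token_ids: (1, seq_len)
        emb = self.embed(token_ids)               # (1, seq_len, embed_dim)
        _, h_n = self.rnn(emb)                    # h_n: (1, 1, hidden_dim)
        return self.out(h_n.squeeze(0))           # logits: (1, num_classes)

def lisa_prefix_scores(model, token_ids, target_class):
    """Feed growing prefixes w_1..w_t and record the target-class probability,
    tracing how the network accumulates semantics toward its decision."""
    model.eval()
    scores = []
    with torch.no_grad():
        for t in range(1, token_ids.size(1) + 1):
            logits = model(token_ids[:, :t])
            prob = torch.softmax(logits, dim=-1)[0, target_class].item()
            scores.append(prob)
    return scores  # one score per prefix length
```

In this reading, the tokens whose addition produces the largest jump in the target-class probability form the salient pattern for that example, which is the intuition behind the example2pattern transformation.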

