Conference: 1st EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP (2018)

Iterative Recursive Attention Model for Interpretable Sequence Classification



Abstract

Natural language processing has greatly benefited from the introduction of the attention mechanism. However, standard attention models are of limited interpretability for tasks that involve a series of inference steps. We describe an iterative recursive attention model, which constructs incremental representations of input data through reusing results of previously computed queries. We train our model on sentiment classification datasets and demonstrate its capacity to identify and combine different aspects of the input in an easily interpretable manner, while obtaining performance close to the state of the art.
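The abstract describes attention applied iteratively, with each step's query incorporating the results of previously computed queries so the model builds up its representation incrementally. The following is a minimal sketch of that idea in NumPy; the attention scoring and the query-update rule (a simple interpolation) are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def iterative_attention(tokens, query, steps=3):
    """Sketch of iterative attention over a sequence.

    tokens: (n, d) array of token vectors.
    query:  (d,) initial query vector.
    Each step attends over the tokens, then folds the attended
    summary back into the query so later steps reuse the result
    of earlier ones. The 50/50 update rule is an assumption made
    for illustration.
    """
    weights_per_step = []
    for _ in range(steps):
        scores = tokens @ query              # (n,) dot-product scores
        weights = softmax(scores)            # attention distribution
        summary = weights @ tokens           # (d,) attended summary
        query = 0.5 * query + 0.5 * summary  # reuse previous result
        weights_per_step.append(weights)
    return query, weights_per_step
```

Returning the per-step attention weights is what makes such a model inspectable: one can visualize which tokens each inference step focused on.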

