Conference on Empirical Methods in Natural Language Processing

Recognizing Implicit Discourse Relations via Repeated Reading: Neural Networks with Multi-Level Attention



Abstract

Recognizing implicit discourse relations is a challenging but important task in the field of Natural Language Processing. For such a complex text-processing task, unlike previous studies, we argue that it is necessary to repeatedly read the arguments and dynamically exploit the features that are effective for recognizing discourse relations. To mimic this repeated-reading strategy, we propose neural networks with multi-level attention (NNMA), which combine an attention mechanism with external memories to gradually fix attention on the specific words that are helpful for judging the discourse relation. Experiments on the PDTB dataset show that our proposed method achieves state-of-the-art results. Visualization of the attention weights also illustrates how our model observes the arguments at each level and progressively locates the important words.
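To make the repeated-reading idea concrete, below is a minimal sketch (not the authors' released code) of a multi-level attention model in PyTorch: each "reading" level attends over both arguments conditioned on an external memory vector, then updates that memory before a final relation classifier. All layer choices, dimensions, the GRU-cell memory update, and the names (MultiLevelAttention, attend, memory_update) are illustrative assumptions rather than details taken from the paper.

import torch
import torch.nn as nn


class MultiLevelAttention(nn.Module):
    """Sketch of multi-level attention with an external memory (assumed design)."""

    def __init__(self, vocab_size, emb_dim=100, hid_dim=128, levels=3, num_relations=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Bidirectional GRU encoder shared by both arguments (assumption).
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        enc_dim = 2 * hid_dim
        self.levels = levels
        # One attention scorer per reading level, conditioned on the current memory.
        self.att = nn.ModuleList([nn.Linear(2 * enc_dim, 1) for _ in range(levels)])
        # External memory updated after each pass (GRU cell is an assumption).
        self.memory_update = nn.GRUCell(2 * enc_dim, enc_dim)
        self.classifier = nn.Linear(enc_dim, num_relations)

    def attend(self, states, memory, level):
        # states: (batch, seq, enc_dim); memory: (batch, enc_dim)
        mem = memory.unsqueeze(1).expand(-1, states.size(1), -1)
        scores = self.att[level](torch.cat([states, mem], dim=-1)).squeeze(-1)
        weights = torch.softmax(scores, dim=-1)  # where this pass "looks"
        return torch.bmm(weights.unsqueeze(1), states).squeeze(1), weights

    def forward(self, arg1_ids, arg2_ids):
        h1, _ = self.encoder(self.embed(arg1_ids))  # (batch, len1, enc_dim)
        h2, _ = self.encoder(self.embed(arg2_ids))  # (batch, len2, enc_dim)
        # Initialize the memory with the mean of both arguments' states (assumption).
        memory = torch.cat([h1, h2], dim=1).mean(dim=1)
        for level in range(self.levels):  # repeated reading passes
            s1, _w1 = self.attend(h1, memory, level)  # _w1/_w2 are the per-level
            s2, _w2 = self.attend(h2, memory, level)  # attention weights to visualize
            memory = self.memory_update(torch.cat([s1, s2], dim=-1), memory)
        return self.classifier(memory)  # discourse relation logits


# Usage on a toy batch of two argument pairs (token ids are placeholders).
model = MultiLevelAttention(vocab_size=1000)
arg1 = torch.randint(0, 1000, (2, 12))
arg2 = torch.randint(0, 1000, (2, 9))
logits = model(arg1, arg2)  # shape: (2, num_relations)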
