International Conference on Natural Language Processing and Chinese Computing

Will Repeated Reading Benefit Natural Language Understanding?



Abstract

Repeated reading (re-reading), i.e., reading a sentence twice to reach a better understanding, has been applied to machine reading tasks, but there have been no rigorous evaluations of its exact contribution to natural language processing. In this paper, we design four tasks, each representing a different class of NLP task: (1) part-of-speech tagging, (2) sentiment analysis, (3) semantic relation classification, and (4) event extraction. We take a bidirectional LSTM-RNN architecture as the standard model for these tasks. On top of this standard model, we add a repeated reading mechanism so that the model can better "understand" the current sentence by reading it twice. We compare three repeated reading architectures: (1) multi-level attention, (2) deep BiLSTM, and (3) multi-pass BiLSTM, enforcing an apples-to-apples comparison as much as possible. Our goal is to better understand in which situations a repeated reading mechanism can help an NLP task, and which of the three architectures is most appropriate for repeated reading. We find that the repeated reading mechanism does improve performance on some tasks (sentiment analysis, semantic relation classification, event extraction) but not on others (POS tagging). We discuss how these differences may arise in each task, and then offer suggestions on whether to use a repeated reading model, and which one, when faced with a new task. Our results thus shed light on the use of repeated reading in NLP tasks.
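To illustrate the multi-pass idea described above, here is a minimal sketch of a two-pass bidirectional encoder. It is not the paper's implementation: it uses a plain tanh RNN cell in place of LSTM cells, and all function names, dimensions, and the random initialization are illustrative assumptions. The key point it shows is the re-reading step: the second pass encodes the sentence again with the first pass's token representations appended to each embedding.

```python
import numpy as np

def rnn_pass(inputs, W, U, b):
    """Run a plain tanh RNN over a sequence; return all hidden states (T, H)."""
    h = np.zeros(b.shape[0])
    states = []
    for x in inputs:
        h = np.tanh(W @ x + U @ h + b)
        states.append(h)
    return np.stack(states)

def bi_rnn(inputs, fwd, bwd):
    """Bidirectional pass: concatenate forward and backward states per token."""
    f = rnn_pass(inputs, *fwd)
    b = rnn_pass(inputs[::-1], *bwd)[::-1]
    return np.concatenate([f, b], axis=1)          # shape (T, 2H)

def make_params(d_in, d_h, rng):
    """One direction's weights: input, recurrent, bias (illustrative init)."""
    return (rng.standard_normal((d_h, d_in)) * 0.1,
            rng.standard_normal((d_h, d_h)) * 0.1,
            np.zeros(d_h))

def multi_pass_encode(emb, d_h, rng):
    """Multi-pass reading: the second BiRNN re-reads the sentence with the
    first pass's token representations appended to each embedding."""
    d_in = emb.shape[1]
    first = bi_rnn(emb, make_params(d_in, d_h, rng),
                        make_params(d_in, d_h, rng))
    reread_in = np.concatenate([emb, first], axis=1)   # embedding + first reading
    d2 = reread_in.shape[1]
    return bi_rnn(reread_in, make_params(d2, d_h, rng),
                            make_params(d2, d_h, rng))

rng = np.random.default_rng(0)
sentence = rng.standard_normal((5, 8))      # 5 tokens, 8-dim embeddings
out = multi_pass_encode(sentence, d_h=6, rng=rng)
print(out.shape)                            # (5, 12): 2 * hidden per token
```

In the deep-BiLSTM variant the second layer would instead consume only the first layer's states; in the multi-pass variant sketched here, the raw embeddings are fed in again, which is what makes the second pass a literal "re-read" of the sentence.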
