International Conference on Applications of Natural Language to Information Systems

A Hierarchical Iterative Attention Model for Machine Comprehension


Abstract

Enabling a computer to understand a document so that it can answer comprehension questions is a central yet unsolved goal of Natural Language Processing, which makes reading comprehension of text an important problem in NLP research. In this paper, we propose a novel Hierarchical Iterative Attention model (HIA), which constructs an iterative alternating attention mechanism over tree-structured rather than sequential representations. The proposed HIA model continually refines its view of the query and the document while aggregating the information required to answer the query, computing attention not only over the document but also over the query, so that both sides benefit from their mutual information. Experimental results show that HIA achieves state-of-the-art performance on public English datasets such as the CNN and Children's Book Test datasets. Furthermore, HIA also outperforms state-of-the-art systems by a large margin on Chinese datasets, including the People Daily and Children's Fairy Tale datasets, which were recently released and are the first Chinese reading comprehension datasets.
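The iterative alternating attention loop described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it attends over flat token encodings rather than HIA's tree-structured representations, and the module names, bilinear scoring functions, dimensions, and number of inference steps are all assumptions made for the example.

```python
# Minimal sketch of an iterative alternating attention loop: at each step the
# model attends first over the query, then over the document, and feeds both
# glimpses into a recurrent inference state. All details here are assumptions
# for illustration, not the HIA model itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IterativeAlternatingAttention(nn.Module):
    def __init__(self, hidden_size: int, num_steps: int = 3):
        super().__init__()
        self.num_steps = num_steps
        # Bilinear scoring of the inference state against token encodings.
        self.query_attn = nn.Bilinear(hidden_size, hidden_size, 1)
        self.doc_attn = nn.Bilinear(hidden_size, hidden_size, 1)
        # The inference state is refined from the two glimpses at each step.
        self.state_cell = nn.GRUCell(2 * hidden_size, hidden_size)

    def _glimpse(self, scorer, state, encodings):
        # encodings: (batch, length, hidden); state: (batch, hidden)
        expanded = state.unsqueeze(1).expand_as(encodings).contiguous()
        scores = scorer(expanded, encodings.contiguous()).squeeze(-1)
        weights = F.softmax(scores, dim=1)  # attention over tokens
        glimpse = torch.bmm(weights.unsqueeze(1), encodings).squeeze(1)
        return glimpse, weights

    def forward(self, query_enc, doc_enc):
        batch, hidden = query_enc.size(0), query_enc.size(2)
        state = query_enc.new_zeros(batch, hidden)
        doc_weights = None
        for _ in range(self.num_steps):
            # Alternate: query glimpse first, then document glimpse.
            q_glimpse, _ = self._glimpse(self.query_attn, state, query_enc)
            d_glimpse, doc_weights = self._glimpse(self.doc_attn, state, doc_enc)
            state = self.state_cell(torch.cat([q_glimpse, d_glimpse], dim=-1), state)
        # The final document attention can be used to score candidate answers.
        return doc_weights

# Toy usage with random encodings as stand-ins for learned representations.
if __name__ == "__main__":
    attn = IterativeAlternatingAttention(hidden_size=64, num_steps=3)
    q = torch.randn(2, 10, 64)   # (batch, query length, hidden)
    d = torch.randn(2, 80, 64)   # (batch, document length, hidden)
    print(attn(q, d).shape)      # torch.Size([2, 80])
```

In a cloze-style setting such as CNN or the Children's Book Test, the final document attention weights could, for example, be summed over the positions of each candidate answer to produce answer scores.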
