Workshop on New Frontiers in Summarization 2017

Coarse-to-Fine Attention Models for Document Summarization



Abstract

Sequence-to-sequence models with attention have been successful for a variety of NLP problems, but their speed does not scale well for tasks with long source sequences such as document summarization. We propose a novel coarse-to-fine attention model that hierarchically reads a document, using coarse attention to select top-level chunks of text and fine attention to read the words of the chosen chunks. While the computation for training standard attention models scales linearly with source sequence length, our method scales with the number of top-level chunks and can handle much longer sequences. Empirically, we find that while coarse-to-fine attention models lag behind state-of-the-art baselines, our method achieves the desired behavior of sparsely attending to subsets of the document for generation.
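To make the two-level attention concrete, here is a minimal NumPy sketch of a single decoder step under the coarse-to-fine scheme the abstract describes. The mean-pooled chunk representations, dot-product scoring, and hard top-1 chunk selection are illustrative assumptions rather than the authors' implementation, and the sketch ignores how the hard selection would be trained; it only shows why the fine-attention cost depends on chunk size rather than document length.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def coarse_to_fine_context(query, chunks):
    """Two-level attention for one decoder step: coarse attention
    picks a chunk, fine attention attends only over that chunk's words.

    query:  (d,) decoder state
    chunks: list of (n_i, d) arrays of word vectors, one per chunk
    """
    # Coarse level: score each chunk via its mean word vector
    # (an assumption; the paper's scoring function may differ).
    chunk_reprs = np.stack([c.mean(axis=0) for c in chunks])  # (k, d)
    coarse_weights = softmax(chunk_reprs @ query)             # (k,)

    # Hard selection: attend only to the highest-scoring chunk, so the
    # fine softmax runs over one chunk, not the whole document.
    j = int(coarse_weights.argmax())
    fine_weights = softmax(chunks[j] @ query)                 # (n_j,)

    # Context vector for the decoder: weighted sum of the chunk's words.
    return fine_weights @ chunks[j], j

# Toy usage: a "document" of 3 chunks, 5 words each, dimension 8.
rng = np.random.default_rng(0)
chunks = [rng.normal(size=(5, 8)) for _ in range(3)]
query = rng.normal(size=8)
context, chosen = coarse_to_fine_context(query, chunks)
print(chosen, context.shape)  # index of the attended chunk, (8,)
```

Because the fine softmax runs over the n_j words of a single chunk instead of all words in the document, each decoder step touches only a small slice of the source, which is the sparse attending-to-subsets behavior the abstract reports.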
