European conference on IR research

Self-attentive Model for Headline Generation



Abstract

Headline generation is a special type of text summarization task. While the amount of available training data for this task is almost unlimited, it remains challenging, since learning to generate headlines for news articles requires the model to reason strongly about natural language. To address this, we applied the recent Universal Transformer architecture paired with byte-pair encoding and achieved new state-of-the-art results on the New York Times Annotated Corpus, with a ROUGE-L F1-score of 24.84 and a ROUGE-2 F1-score of 13.48. We also present the new RIA corpus, on which we reach a ROUGE-L F1-score of 36.81 and a ROUGE-2 F1-score of 22.15.
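To make the byte-pair encoding step concrete, here is a minimal, self-contained Python sketch of BPE merge learning in the style of Sennrich et al. It is not the authors' code; the toy corpus and the number of merges are illustrative assumptions.

    import re
    from collections import Counter

    def get_pair_counts(vocab):
        """Count adjacent symbol pairs, weighted by word frequency."""
        pairs = Counter()
        for word, freq in vocab.items():
            symbols = word.split()
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        return pairs

    def merge_pair(pair, vocab):
        """Merge every standalone occurrence of `pair` into one symbol."""
        bigram = re.escape(" ".join(pair))
        pattern = re.compile(r"(?<!\S)" + bigram + r"(?!\S)")
        return {pattern.sub("".join(pair), word): freq
                for word, freq in vocab.items()}

    def learn_bpe(word_freqs, num_merges):
        """Learn BPE merge operations from a {word: frequency} dict."""
        # Start from single characters; '</w>' marks the end of a word.
        vocab = {" ".join(word) + " </w>": f for word, f in word_freqs.items()}
        merges = []
        for _ in range(num_merges):
            pairs = get_pair_counts(vocab)
            if not pairs:
                break
            best = max(pairs, key=pairs.get)
            vocab = merge_pair(best, vocab)
            merges.append(best)
        return merges

    if __name__ == "__main__":
        # Toy corpus: frequent subwords such as 'head' and 'line' merge first.
        corpus = {"headline": 5, "headlines": 3, "head": 2, "lines": 4}
        for merge in learn_bpe(corpus, 8):
            print(merge)

The learned merge list is then applied greedily to segment both articles and headlines into subword units, which keeps the vocabulary small while still covering rare words.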
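The Universal Transformer differs from the standard Transformer mainly in that one layer's weights are shared across depth and re-applied recurrently, with a per-step timestep signal added to the representations. Below is a minimal PyTorch sketch of that recurrence on the encoder side; it is an illustration under assumed hyperparameters (model width, number of steps), not the paper's implementation, and it omits refinements such as adaptive computation time.

    import torch
    import torch.nn as nn

    class UniversalTransformerEncoder(nn.Module):
        """One shared attention + transition block applied for n_steps."""
        def __init__(self, d_model=128, n_heads=4, d_ff=512, n_steps=4):
            super().__init__()
            self.n_steps = n_steps
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.ff = nn.Sequential(
                nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
            )
            self.norm1 = nn.LayerNorm(d_model)
            self.norm2 = nn.LayerNorm(d_model)
            # One learned embedding per recurrence step (timestep signal).
            self.step_emb = nn.Embedding(n_steps, d_model)

        def forward(self, x):
            for t in range(self.n_steps):
                # Inject the timestep embedding, then apply the shared block.
                h = x + self.step_emb.weight[t]
                a, _ = self.attn(h, h, h)
                x = self.norm1(x + a)
                x = self.norm2(x + self.ff(x))
            return x

    if __name__ == "__main__":
        model = UniversalTransformerEncoder()
        tokens = torch.randn(2, 16, 128)   # (batch, seq_len, d_model)
        print(model(tokens).shape)         # torch.Size([2, 16, 128])

Because the same parameters are reused at every depth step, the model is much smaller than a stack of distinct layers of the same effective depth, which helps when the target outputs (headlines) are short and highly abstractive.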
