International joint conference on natural language processing

Answers Unite! Unsupervised Metrics for Reinforced Summarization Models



Abstract

Abstractive summarization approaches based on Reinforcement Learning (RL) have recently been proposed to overcome the limitations of classical likelihood maximization. RL makes it possible to optimize complex, possibly non-differentiable, metrics that globally assess the quality and relevance of the generated outputs. ROUGE, the most widely used summarization metric, is known to suffer from a bias towards lexical similarity and to account poorly for the fluency and readability of the generated abstracts. We therefore explore and propose alternative evaluation measures: the reported human-evaluation analysis shows that the proposed metrics, based on Question Answering, compare favorably to ROUGE, with the additional property of not requiring reference summaries. Training an RL-based model on these metrics leads to improvements, in terms of both human and automated metrics, over current approaches that use ROUGE as a reward.
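To make the idea of a reference-free, QA-based reward concrete, here is a minimal toy sketch. It assumes cloze-style questions formed by masking entities in the source document, and a naive span-matching stand-in for a real QA model; the names `token_f1` and `qa_reward` and the whole span-matching scheme are illustrative assumptions, not the paper's actual models. The summary is rewarded for containing spans that recover the masked answers, so no reference summary is needed.

```python
from collections import Counter


def token_f1(pred: str, gold: str) -> float:
    """SQuAD-style token-overlap F1 between a predicted and a gold answer."""
    pred_toks = pred.lower().split()
    gold_toks = gold.lower().split()
    if not pred_toks or not gold_toks:
        return float(pred_toks == gold_toks)
    # Multiset intersection: count shared tokens with multiplicity.
    common = sum((Counter(pred_toks) & Counter(gold_toks)).values())
    if common == 0:
        return 0.0
    precision = common / len(pred_toks)
    recall = common / len(gold_toks)
    return 2 * precision * recall / (precision + recall)


def qa_reward(summary: str, answers: list) -> float:
    """Toy QA-based reward: average, over cloze answers masked out of the
    source, of the best token F1 between that answer and any summary span
    of the same length.  No reference summary is involved."""
    if not answers:
        return 0.0
    s_toks = summary.split()
    scores = []
    for ans in answers:
        n = len(ans.split())
        # All candidate spans of the answer's length (or the whole summary
        # if it is shorter than the answer).
        spans = [" ".join(s_toks[i:i + n])
                 for i in range(max(1, len(s_toks) - n + 1))]
        scores.append(max((token_f1(sp, ans) for sp in spans), default=0.0))
    return sum(scores) / len(answers)
```

A summary that preserves the salient source entities gets a high reward (e.g. `qa_reward("Marie Curie won the Nobel Prize in 1903", ["Marie Curie", "1903"])` is 1.0, while an unrelated summary scores 0.0), and this scalar could serve as the episode return in a policy-gradient setup in place of ROUGE.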
