Venue: International Conference on Computational Linguistics

Generating Reasonable and Diversified Story Ending Using Sequence to Sequence Model with Adversarial Training



Abstract

Story generation is a challenging problem in artificial intelligence (AI) and has received a great deal of interest in the natural language processing (NLP) community. Most previous work tried to solve this problem using a Sequence to Sequence (Seq2Seq) model trained with Maximum Likelihood Estimation (MLE). However, the pure MLE training objective greatly limits the power of the Seq2Seq model in generating high-quality stories. In this paper, we propose using an adversarial-training-augmented Seq2Seq model to generate reasonable and diversified story endings given a story context. Our model includes a generator that defines the policy of generating a story ending, and a discriminator that labels story endings as human-generated or machine-generated. Carefully designed human and automatic evaluation metrics demonstrate that our adversarial-training-augmented Seq2Seq model can generate more reasonable and diversified story endings compared to a purely MLE-trained Seq2Seq model. Moreover, our model achieves better performance on the Story Cloze Test task, with an accuracy of 62.6%, compared with state-of-the-art baseline methods.
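The generator/discriminator interaction described in the abstract can be sketched as a toy policy-gradient loop: the generator samples an ending, the discriminator scores how "human-like" it is, and that score is used as the reward for a REINFORCE-style update. Everything below (dimensions, the bag-of-words discriminator, the single-matrix "policy", all function names) is an illustrative assumption, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, END_LEN, CTX_DIM = 20, 5, 8

# Generator: a toy policy mapping a context vector to a distribution
# over ending tokens (a stand-in for a Seq2Seq decoder).
W_g = rng.normal(scale=0.1, size=(CTX_DIM, VOCAB))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def generate_ending(ctx):
    """Sample an ending from the generator's token distribution."""
    probs = softmax(ctx @ W_g)
    return rng.choice(VOCAB, size=END_LEN, p=probs), probs

# Discriminator: logistic regression over a bag-of-words of the
# ending, scoring how "human-like" the ending looks.
w_d = rng.normal(scale=0.1, size=VOCAB)

def bow(ending):
    v = np.zeros(VOCAB)
    for t in ending:
        v[t] += 1.0
    return v

def d_score(ending):
    return 1.0 / (1.0 + np.exp(-(w_d @ bow(ending))))

def reinforce_step(ctx, lr=0.05):
    """One policy-gradient update: the discriminator's score on the
    sampled ending is the reward that reweights the log-prob gradient."""
    global W_g
    ending, probs = generate_ending(ctx)
    reward = d_score(ending)  # higher = judged more human-like
    for t in ending:
        grad = -probs.copy()
        grad[t] += 1.0        # d log p(token t) / d logits
        W_g += lr * reward * np.outer(ctx, grad)
    return reward
```

In a real system both networks would be trained alternately (the discriminator on human vs. sampled endings, the generator on the discriminator's rewards); this sketch only shows the generator's side of that loop.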


