Venue: Annual Meeting of the Association for Computational Linguistics

Are Training Samples Correlated? Learning to Generate Dialogue Responses with Multiple References



Abstract

Due to its potential applications, open-domain dialogue generation has become popular and achieved remarkable progress in recent years, but it sometimes suffers from generic responses. Previous models are generally trained on a 1-to-1 mapping from an input query to its response, which ignores the 1-to-n nature of dialogue: multiple valid responses may correspond to the same query. In this paper, we propose to utilize multiple references by considering the correlation of different valid responses and modeling the 1-to-n mapping with a novel two-step generation architecture. The first generation phase extracts the common features of different responses, which, combined with the distinctive features obtained in the second phase, can generate multiple diverse and appropriate responses. Experimental results show that our proposed model effectively improves response quality and outperforms existing neural dialogue models on both automatic and human evaluations.
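The common/distinctive decomposition behind the two-step architecture can be sketched with toy vectors (a hypothetical illustration of the idea, not the paper's actual model; the pooling-by-mean choice here is an assumption for clarity):

```python
import numpy as np

# Toy setup: several valid responses to ONE query, each represented
# as a feature vector (stand-ins for learned response representations).
np.random.seed(0)
references = np.random.randn(3, 4)   # 3 valid responses, 4-dim features

# Phase 1 (hypothetical): pool the shared "common" features across
# all valid responses -- here simply their mean.
common = references.mean(axis=0)

# Phase 2 (hypothetical): each response keeps a "distinctive" residual
# on top of the common part.
distinctive = references - common

# Recombining the common part with any residual recovers one of the
# n distinct responses, i.e. a 1-to-n query-to-response mapping.
reconstructed = common + distinctive
assert np.allclose(reconstructed, references)
```

The point of the decomposition is that the common part captures what every valid response to the query shares, while varying only the distinctive part yields diverse yet appropriate responses.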

