Annual Meeting of the Association for Computational Linguistics

Are Training Samples Correlated? Learning to Generate Dialogue Responses with Multiple References



Abstract

Due to its potential applications, open-domain dialogue generation has become popular and achieved remarkable progress in recent years, but it sometimes suffers from generic responses. Previous models are generally trained on a 1-to-1 mapping from an input query to its response, which ignores the 1-to-n nature of dialogue: multiple valid responses may correspond to the same query. In this paper, we propose to utilize multiple references by considering the correlation among different valid responses and modeling the 1-to-n mapping with a novel two-step generation architecture. The first generation phase extracts the common features of different responses, which, combined with the distinctive features obtained in the second phase, can generate multiple diverse and appropriate responses. Experimental results show that our proposed model can effectively improve response quality and outperforms existing neural dialogue models on both automatic and human evaluations.
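The two-step decomposition described in the abstract can be illustrated with a toy sketch. This is not the paper's actual model; it only mirrors the idea under a simplifying assumption that responses are fixed-size embedding vectors, with the "common" phase taken as their mean and the "distinctive" phase as each response's residual.

```python
import numpy as np

rng = np.random.default_rng(0)

def common_phase(responses):
    # Phase 1 (toy stand-in): extract the features shared by all
    # valid responses to a query -- here, their mean embedding.
    return np.mean(responses, axis=0)

def distinctive_phase(common, response):
    # Phase 2 (toy stand-in): the part of each response that is
    # not explained by the common features.
    return response - common

def generate(common, distinctive):
    # A generated response combines the common features with one
    # set of distinctive features, yielding diverse outputs.
    return common + distinctive

# Three valid responses to the same query, as toy embedding vectors.
responses = rng.normal(size=(3, 4))
c = common_phase(responses)
outputs = [generate(c, distinctive_phase(c, r)) for r in responses]
```

Under this toy decomposition, recombining the common vector with each response's distinctive residual reconstructs that response exactly; the paper's architecture instead learns both phases so that new distinctive features can produce diverse, appropriate responses.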
