
A sequence to sequence model for dialogue generation with gated mixture of topics

Abstract

In this paper, we propose GMoT-Seq2Seq, a sequence-to-sequence (Seq2Seq) model with a gated mixture of topics (MoT) designed to utilize topic information to generate fluent and coherent responses. Seq2Seq models are good at capturing the local structure of word sequences, which affects fluency, owing to their sequential nature, but may have difficulty extracting topic information from the utterance. In contrast, topic models are very capable of capturing global semantic information, which has a direct impact on coherence. Absorbing the advantages of both, the proposed GMoT-Seq2Seq model uses a Seq2Seq to capture the temporal dependencies and an MoT layer to obtain a topic vector that provides global semantic dependencies in the conversation. The MoT layer summarizes the utterances into a proportion vector over several underlying topics. To balance fluency and coherence, we utilize a topic gate to dynamically control the information from the inferred topic vector and the partially generated responses. Experimental results show that our proposed model outperforms the compared baselines and generates more fluent and coherent responses. © 2021 Elsevier B.V. All rights reserved.
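To make the mechanism in the abstract concrete, here is a minimal PyTorch sketch of how an MoT layer and a topic gate could be wired together: the utterance is summarized into a proportion vector over topics, turned into a topic vector, and gated against the decoder state of the partially generated response. All module names, dimensions, and the exact gating formula are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch of a gated mixture-of-topics layer for a Seq2Seq decoder.
# Layer names, dimensions, and the gating formula are assumptions for
# illustration; the published GMoT-Seq2Seq architecture may differ.
import torch
import torch.nn as nn


class GatedMixtureOfTopics(nn.Module):
    def __init__(self, hidden_dim: int, num_topics: int, topic_dim: int):
        super().__init__()
        # Learnable topic embeddings: one vector per underlying topic.
        self.topic_embeddings = nn.Parameter(torch.randn(num_topics, topic_dim))
        # Maps an utterance summary to a proportion vector over topics.
        self.topic_inference = nn.Linear(hidden_dim, num_topics)
        # Topic gate: decides how much topic information flows into decoding,
        # conditioned on both the decoder state and the topic vector.
        self.gate = nn.Linear(hidden_dim + topic_dim, topic_dim)
        self.project = nn.Linear(hidden_dim + topic_dim, hidden_dim)

    def forward(self, utterance_summary: torch.Tensor,
                decoder_state: torch.Tensor) -> torch.Tensor:
        # Proportion vector over topics (the MoT layer's output).
        theta = torch.softmax(self.topic_inference(utterance_summary), dim=-1)
        # Topic vector: expectation of the topic embeddings under theta.
        topic_vec = theta @ self.topic_embeddings
        # Gate computed from the partially generated response (decoder state)
        # and the inferred topic vector.
        g = torch.sigmoid(self.gate(torch.cat([decoder_state, topic_vec], dim=-1)))
        gated_topic = g * topic_vec
        # Fused representation fed to the decoder's output projection.
        return self.project(torch.cat([decoder_state, gated_topic], dim=-1))


# Example: batch of 2, hidden size 512, 50 topics with 128-dim embeddings.
mot = GatedMixtureOfTopics(hidden_dim=512, num_topics=50, topic_dim=128)
summary = torch.randn(2, 512)  # encoder summary of the input utterance
state = torch.randn(2, 512)    # decoder hidden state at the current step
fused = mot(summary, state)
print(fused.shape)             # torch.Size([2, 512])
```

In this sketch the sigmoid gate lets the model attenuate topic information per decoding step, so that function words can rely mostly on the decoder state while content words can draw more heavily on the inferred topics, which matches the fluency/coherence trade-off the abstract describes.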

Bibliographic Information

  • Source
    Neurocomputing | 2021, No. 21 | pp. 282-288 | 7 pages
  • Author Affiliations

    Xi An Jiao Tong Univ Sch Comp Sci & Technol Xian Peoples R China|Xi An Jiao Tong Univ Natl Engn Lab Big Data Analyt Xian Peoples R China;

    Xi An Jiao Tong Univ Sch Comp Sci & Technol Xian Peoples R China|Xi An Jiao Tong Univ Natl Engn Lab Big Data Analyt Xian Peoples R China;

    Southeast Univ Sch Comp Sci & Engn Nanjing Peoples R China;

    Xi An Jiao Tong Univ Natl Engn Lab Big Data Analyt Xian Peoples R China|Xi An Jiao Tong Univ Sch Continuing Educ Xian Peoples R China;

  • Indexed In: Science Citation Index (SCI); Engineering Index (EI)
  • Original Format: PDF
  • Language: English
  • Keywords

    Topic embedding; Dialogue generation; Seq2Seq; Topic gate;

