Conference: International Conference on Computer Supported Cooperative Work in Design

Incorporating Background Knowledge into Dialogue Generation Using Multi-task Transformer Learning

Abstract

Knowledge plays a very important role in dialogue systems. Inspired by how humans use unstructured background knowledge in conversations, this paper proposes a dialogue generation model based on multi-task learning. The model divides conversation generation into two tasks, knowledge selection and response prediction, which are treated as a reading comprehension task and a text generation task, respectively. Specifically, in the knowledge selection task, the pre-trained language model Bidirectional Encoder Representations from Transformers (BERT) is applied to select the relevant knowledge from the background knowledge documents given the current context. In the response prediction task, a Transformer version of the pointer-generator network, composed of an encoder that reuses the shared BERT from knowledge selection and a left-context-only Transformer decoder, copies tokens from the background knowledge via pointing and produces tokens from the vocabulary through a generator. Our experiments on the HOLL-E dataset show that our model achieves better results than strong baseline models and related recent work.
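The abstract describes a shared-encoder, two-head architecture but gives no implementation details. The following is a minimal PyTorch sketch of that setup, assuming Hugging Face `transformers` for the shared BERT encoder, a span-prediction head for knowledge selection, and a causally masked Transformer decoder with a pointer-generator output layer for response prediction. Class names, dimensions, and the way the copy distribution is formed are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from transformers import BertModel


class KnowledgeGroundedDialogueModel(nn.Module):
    """Sketch: shared BERT encoder + span head (knowledge selection)
    + pointer-generator Transformer decoder (response prediction)."""

    def __init__(self, vocab_size=30522, d_model=768, num_decoder_layers=6):
        super().__init__()
        # Shared BERT encoder used by both tasks.
        self.encoder = BertModel.from_pretrained("bert-base-uncased")
        # Knowledge selection as reading comprehension: predict start/end
        # of the supporting span in the background document.
        self.span_head = nn.Linear(d_model, 2)
        # Left-context-only (causally masked) Transformer decoder.
        layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_decoder_layers)
        self.embed = nn.Embedding(vocab_size, d_model)
        self.generator = nn.Linear(d_model, vocab_size)
        # Pointer gate: mixes generating from the vocabulary with
        # copying tokens from the encoded background knowledge.
        self.p_gen = nn.Linear(d_model, 1)

    def forward(self, src_ids, src_mask, tgt_ids):
        # Encode dialogue context + background knowledge with the shared BERT.
        memory = self.encoder(input_ids=src_ids, attention_mask=src_mask).last_hidden_state

        # Task 1: knowledge selection (span prediction over source tokens).
        start_logits, end_logits = self.span_head(memory).split(1, dim=-1)

        # Task 2: response prediction with a causal decoder over target tokens.
        tgt_emb = self.embed(tgt_ids)
        causal_mask = nn.Transformer.generate_square_subsequent_mask(tgt_ids.size(1)).to(tgt_ids.device)
        dec_out = self.decoder(tgt_emb, memory, tgt_mask=causal_mask)

        # Generator: distribution over the vocabulary.
        vocab_dist = torch.softmax(self.generator(dec_out), dim=-1)
        # Pointer: attention over source tokens, scattered back onto the
        # vocabulary via the source token ids (copy distribution).
        attn = torch.softmax(dec_out @ memory.transpose(1, 2), dim=-1)
        copy_index = src_ids.unsqueeze(1).repeat(1, tgt_ids.size(1), 1)
        copy_dist = torch.zeros_like(vocab_dist).scatter_add_(2, copy_index, attn)
        # Mix the two distributions with the pointer gate.
        gate = torch.sigmoid(self.p_gen(dec_out))
        final_dist = gate * vocab_dist + (1.0 - gate) * copy_dist
        return start_logits.squeeze(-1), end_logits.squeeze(-1), final_dist
```

Under these assumptions, training would sum a span cross-entropy loss for knowledge selection and a negative log-likelihood over the mixed output distribution for response prediction, with the BERT parameters shared by both tasks; how the selected span is fed back into the decoder is not specified by the abstract.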
