
Adversarial Domain Adaptation for Variational Neural Language Generation in Dialogue Systems


Abstract

Domain Adaptation arises when we aim to learn, from a source domain, a model that can perform acceptably well on a different target domain. It is especially crucial for Natural Language Generation (NLG) in Spoken Dialogue Systems when there are sufficient annotated data in the source domain but only limited labeled data in the target domain. How to effectively utilize as much of the existing knowledge from the source domain as possible is a crucial issue in domain adaptation. In this paper, we propose an adversarial training procedure to train a variational encoder-decoder based language generator via multiple adaptation steps. In this procedure, a model is first trained on source domain data and then fine-tuned on a small set of target domain utterances under the guidance of two proposed critics. Experimental results show that the proposed method can effectively leverage the existing knowledge in the source domain to adapt to another related domain using only a small amount of in-domain data.
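The abstract only outlines the procedure, so the following Python (PyTorch) sketch illustrates the general pattern it describes rather than the authors' actual code: pre-train a variational encoder-decoder on the source domain, then fine-tune it on a small target-domain set while critics supply an adversarial signal. The toy module sizes, the choice of what the two critics score (latent codes and decoded outputs here), and all loss weights are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn

class VariationalEncoderDecoder(nn.Module):
    """Toy variational encoder-decoder over fixed-size feature vectors."""
    def __init__(self, in_dim=128, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.to_mu = nn.Linear(64, latent_dim)
        self.to_logvar = nn.Linear(64, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, in_dim))
    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation trick
        return self.decoder(z), mu, logvar, z

def vae_loss(recon, target, mu, logvar):
    """Reconstruction term plus KL divergence to the standard normal prior."""
    rec = nn.functional.mse_loss(recon, target)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

def adapt(generator, critic_z, critic_x, src_batch, tgt_batch,
          steps=200, lr=1e-3, adv_weight=0.1):
    """Fine-tune a source-pretrained generator on a small target batch.

    critic_z scores latent codes and critic_x scores decoded outputs; both learn
    to separate source (label 1) from target (label 0), while the generator is
    updated to reconstruct target data and to fool both critics.
    """
    g_opt = torch.optim.Adam(generator.parameters(), lr=lr)
    c_opt = torch.optim.Adam(list(critic_z.parameters()) + list(critic_x.parameters()), lr=lr)
    bce = nn.BCEWithLogitsLoss()
    src_lbl = torch.ones(src_batch.size(0), 1)
    tgt_lbl = torch.zeros(tgt_batch.size(0), 1)
    for _ in range(steps):
        # 1) Critic step: learn to tell source-domain codes/outputs from target ones.
        with torch.no_grad():
            src_x, _, _, src_z = generator(src_batch)
            tgt_x, _, _, tgt_z = generator(tgt_batch)
        c_loss = (bce(critic_z(src_z), src_lbl) + bce(critic_z(tgt_z), tgt_lbl) +
                  bce(critic_x(src_x), src_lbl) + bce(critic_x(tgt_x), tgt_lbl))
        c_opt.zero_grad()
        c_loss.backward()
        c_opt.step()
        # 2) Generator step: reconstruct the small target set while making its
        #    codes and outputs look source-like to both critics.
        recon, mu, logvar, z = generator(tgt_batch)
        fool = bce(critic_z(z), torch.ones(z.size(0), 1)) + \
               bce(critic_x(recon), torch.ones(recon.size(0), 1))
        g_loss = vae_loss(recon, tgt_batch, mu, logvar) + adv_weight * fool
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()
    return generator

if __name__ == "__main__":
    gen = VariationalEncoderDecoder()
    # In practice `gen` would first be pre-trained on abundant source-domain data.
    critic_z, critic_x = nn.Linear(32, 1), nn.Linear(128, 1)
    src = torch.randn(64, 128)   # stand-in for plentiful source-domain features
    tgt = torch.randn(8, 128)    # stand-in for a small set of target-domain utterances
    adapt(gen, critic_z, critic_x, src, tgt)

The source-versus-target labelling above follows the standard domain-adversarial pattern; the paper's actual critics, generator architecture, and training objectives should be taken from the full text.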


