International Conference on Computational Linguistics

Adversarial Domain Adaptation for Variational Neural Language Generation in Dialogue Systems



Abstract

Domain adaptation arises when we aim to learn, from a source domain, a model that performs acceptably well on a different target domain. It is especially crucial for Natural Language Generation (NLG) in Spoken Dialogue Systems, where annotated data is plentiful in the source domain but labeled data in the target domain is limited. How to effectively exploit the knowledge already acquired in the source domains is a central issue in domain adaptation. In this paper, we propose an adversarial training procedure that trains a variational encoder-decoder based language generator via multiple adaptation steps. In this procedure, a model is first trained on source-domain data and then fine-tuned on a small set of target-domain utterances under the guidance of two proposed critics. Experimental results show that the proposed method can effectively leverage the existing knowledge in the source domain to adapt to another related domain using only a small amount of in-domain data.
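The abstract outlines a two-step recipe: pretrain a variational encoder-decoder generator on plentiful source-domain data, then fine-tune it on a handful of target-domain utterances with adversarial guidance from critics. The sketch below illustrates that recipe in PyTorch under loose assumptions: the toy model sizes, the KL and adversarial loss weights, and the single latent-space domain critic are placeholders, since the abstract does not specify how its two critics are constructed.

```python
# Hedged sketch of the adaptation recipe described in the abstract: pretrain a
# variational encoder-decoder generator on the source domain, then fine-tune it
# on a small target-domain set under adversarial guidance. The module sizes and
# the single "domain critic" used here are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, HID, LATENT = 5000, 64, 128, 32

class VariationalGenerator(nn.Module):
    """Toy variational encoder-decoder over token sequences."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.encoder = nn.GRU(EMB, HID, batch_first=True)
        self.to_mu = nn.Linear(HID, LATENT)
        self.to_logvar = nn.Linear(HID, LATENT)
        self.decoder = nn.GRU(EMB + LATENT, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, tokens):
        emb = self.embed(tokens)
        _, h = self.encoder(emb)                       # h: (1, B, HID)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        z_seq = z.unsqueeze(1).expand(-1, tokens.size(1), -1)
        dec_out, _ = self.decoder(torch.cat([emb, z_seq], dim=-1))
        return self.out(dec_out), mu, logvar, z

def vae_loss(logits, tokens, mu, logvar):
    rec = F.cross_entropy(logits.transpose(1, 2), tokens)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + 0.1 * kl                              # assumed KL weight

critic = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, 1))

gen = VariationalGenerator()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

src = torch.randint(0, VOCAB, (32, 12))   # plentiful source-domain batch
tgt = torch.randint(0, VOCAB, (8, 12))    # small target-domain batch

# Step 1: train the generator on source-domain data only.
for _ in range(5):
    logits, mu, logvar, _ = gen(src)
    loss = vae_loss(logits, src, mu, logvar)
    opt_g.zero_grad(); loss.backward(); opt_g.step()

# Step 2: adversarial fine-tuning on the target domain. The critic learns to
# tell source latents from target latents; the generator is updated to fool it
# while still reconstructing the few target utterances it sees.
for _ in range(5):
    with torch.no_grad():
        _, _, _, z_src = gen(src)
    _, _, _, z_tgt = gen(tgt)
    c_loss = (F.binary_cross_entropy_with_logits(critic(z_src), torch.ones(len(src), 1))
              + F.binary_cross_entropy_with_logits(critic(z_tgt.detach()), torch.zeros(len(tgt), 1)))
    opt_c.zero_grad(); c_loss.backward(); opt_c.step()

    logits, mu, logvar, z_tgt = gen(tgt)
    adv = F.binary_cross_entropy_with_logits(critic(z_tgt), torch.ones(len(tgt), 1))
    loss = vae_loss(logits, tgt, mu, logvar) + 0.5 * adv
    opt_g.zero_grad(); loss.backward(); opt_g.step()
```

The adversarial term pushes target-domain latent codes toward the region the critic associates with the source domain, which is one common way to reuse source-domain knowledge when only a small in-domain set is available; the paper's actual critics may operate on different representations.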
