2017 4th NAFOSTED Conference on Information and Computer Science

Towards domain adaptation for Neural Network Language Generation in Dialogue


Abstract

Extending from a limited domain to a new one is crucial for Natural Language Generation in Dialogue, especially when annotated data are plentiful in the source domain but labeled data are scarce in the target domain. This paper studies the performance and domain adaptation of two different Neural Network Language Generators in Spoken Dialogue Systems: a gating-based Recurrent Neural Network Generator and an extension of an Attentional Encoder-Decoder Generator. In a model fine-tuning scenario, we found that by separating slot and value parameterizations, the attention-based generators, in comparison to the gating-based generators, not only prevent semantic repetition in generated outputs and achieve better performance across all domains, but also adapt faster to a new, unseen domain by leveraging existing data. The empirical results show that the attention-based generator can adapt to an open domain when only a limited amount of target domain data is available.
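The "separating slot and value parameterizations" mentioned above is commonly realized in dialogue NLG via delexicalization: slot values in the training utterance are replaced by placeholder tokens so the generator learns domain-general sentence patterns, and the concrete values are copied back in at surface-realization time. A minimal, hypothetical sketch of this idea (the function names and slot names below are illustrative, not from the paper):

```python
def delexicalize(utterance: str, slots: dict) -> tuple:
    """Replace each slot value with a SLOT_<NAME> placeholder."""
    mapping = {}
    for name, value in slots.items():
        placeholder = f"SLOT_{name.upper()}"
        if value in utterance:
            utterance = utterance.replace(value, placeholder)
            mapping[placeholder] = value
    return utterance, mapping

def relexicalize(template: str, mapping: dict) -> str:
    """Copy the original slot values back into the generated template."""
    for placeholder, value in mapping.items():
        template = template.replace(placeholder, value)
    return template

utt = "the red door cafe serves cheap italian food"
slots = {"name": "the red door cafe", "pricerange": "cheap", "food": "italian"}
tpl, mapping = delexicalize(utt, slots)
# tpl == "SLOT_NAME serves SLOT_PRICERANGE SLOT_FOOD food"
assert relexicalize(tpl, mapping) == utt
```

Because the generator only ever sees placeholder tokens such as `SLOT_NAME`, templates learned in one domain can transfer to an unseen domain whose slots share the same placeholders, which is one intuition behind the faster adaptation reported in the abstract.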
