NAFOSTED Conference on Information and Computer Science

Towards domain adaptation for Neural Network Language Generation in Dialogue



Abstract

Extending from a limited domain to a new domain is crucial for Natural Language Generation in Dialogue, especially when there is abundant annotated data in the source domain but little labeled data in the target domain. This paper studies the performance and domain adaptation of two different Neural Network Language Generators in Spoken Dialogue Systems: a gating-based Recurrent Neural Network Generator and an extension of an Attentional Encoder-Decoder Generator. We found, in a model fine-tuning scenario, that by separating slot and value parameterizations, the attention-based generators, in comparison to the gating-based generators, are able not only to prevent semantic repetition in generated outputs and obtain better performance across all domains, but also to adapt faster to a new, unseen domain by leveraging existing data. The empirical results show that the attention-based generator can adapt to an open domain when only a limited amount of target domain data is available.
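The "separating slot and value parameterizations" idea rests on delexicalization: slot values in a meaning representation are swapped for placeholder tokens so the generator learns slot-level surface patterns that transfer across domains, and values are substituted back in afterward. The sketch below is purely illustrative (the slot names and example data are hypothetical, not taken from the paper):

```python
# Illustrative sketch of delexicalization for dialogue NLG.
# Slot values are replaced with SLOT_<name> placeholders before training,
# so the model sees domain-general templates; after generation, real
# values are substituted back in (relexicalization).

def delexicalize(surface: str, slots: dict) -> str:
    """Replace each slot value appearing in `surface` with a placeholder token."""
    for name, value in slots.items():
        surface = surface.replace(value, f"SLOT_{name.upper()}")
    return surface

def relexicalize(template: str, slots: dict) -> str:
    """Substitute concrete slot values back into a delexicalized template."""
    for name, value in slots.items():
        template = template.replace(f"SLOT_{name.upper()}", value)
    return template

# Hypothetical restaurant-domain example.
slots = {"name": "Golden Wok", "food": "Chinese"}
surface = "Golden Wok serves Chinese food."
delex = delexicalize(surface, slots)
print(delex)                         # SLOT_NAME serves SLOT_FOOD food.
print(relexicalize(delex, slots))    # Golden Wok serves Chinese food.
```

Because the delexicalized template mentions no domain-specific values, the same learned pattern can be reused when fine-tuning on a new, unseen domain with few labeled examples.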
