Computer Speech and Language

Variational model for low-resource natural language generation in spoken dialogue systems



Abstract

Natural Language Generation (NLG) plays a critical role in Spoken Dialogue Systems (SDSs): it converts a meaning representation into natural language utterances. Recent deep learning-based generators have shown improved results when provided with sufficient annotated data. Nevertheless, how to build a generator that can effectively exploit the knowledge available in a low-resource setting remains a crucial issue for NLG in SDSs. This paper presents a variational NLG framework that tackles the problem of limited annotated data in two scenarios: domain adaptation and low-resource in-domain training data. Based on this framework, we propose a novel adversarial domain adaptation NLG model tackling the former issue, while the latter issue is handled by a second proposed dual variational model. We conducted extensive experiments on four different domains in a variety of training scenarios. The results show that the proposed methods not only outperform previous methods when sufficient training data is available, but also work acceptably well with only a small amount of in-domain data, and adapt quickly to a new domain given only low-resource target-domain data.
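The variational framework the abstract refers to builds on standard variational autoencoder machinery. As a rough illustration only (not the paper's actual model), the two core ingredients, the reparameterization trick for sampling a latent code and the KL regularizer against a standard normal prior, can be sketched in NumPy:

```python
import numpy as np

def reparameterize(mu, logvar, rng):
    # Sample z = mu + sigma * eps with eps ~ N(0, I); writing the sample
    # this way keeps it differentiable w.r.t. mu and logvar in a real
    # autodiff framework.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_divergence(mu, logvar):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ),
    # summed over the latent dimensions.
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)

# Toy "encoder" output for one meaning representation (2-dim latent).
rng = np.random.default_rng(0)
mu = np.array([0.3, -0.1])
logvar = np.array([-1.0, -1.0])

z = reparameterize(mu, logvar, rng)
print(z.shape)                                   # (2,)
print(kl_divergence(np.zeros(2), np.zeros(2)))   # 0.0: prior matches itself
```

In a full generator, `z` would condition a decoder that emits the utterance, and the KL term would be added to the reconstruction loss; the function names here are illustrative, not from the paper.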


