International Natural Language Generation Conference

Semi-Supervised Neural Text Generation by Joint Learning of Natural Language Generation and Natural Language Understanding Models


Abstract

In Natural Language Generation (NLG), End-to-End (E2E) systems trained through deep learning have recently attracted strong interest. Such deep models need a large amount of carefully annotated data to reach satisfactory performance. However, acquiring such datasets for every new NLG application is a tedious and time-consuming task. In this paper, we propose a semi-supervised deep learning scheme that can learn from non-annotated data and from annotated data when available. It jointly learns NLG and Natural Language Understanding (NLU) sequence-to-sequence models to compensate for the lack of annotation. Experiments on two benchmark datasets show that, with a limited amount of annotated data, the method achieves very competitive results without using any preprocessing or re-scoring tricks. These findings open the way to exploiting non-annotated datasets, which is the current bottleneck in extending E2E NLG systems to new applications.
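The abstract describes a joint objective: supervised losses on annotated (meaning representation, text) pairs for both the NLG and NLU models, plus a reconstruction signal that cycles unannotated text through NLU and back through NLG. The sketch below illustrates only the shape of such an objective; the `ToySeq2Seq` class, its dummy loss, and the `joint_loss` weighting are hypothetical stand-ins, not the paper's actual models or training procedure.

```python
# Illustrative sketch (not the authors' code) of a joint NLG/NLU
# semi-supervised objective: supervised terms on annotated pairs plus an
# autoencoding term that cycles unannotated text through both models.

class ToySeq2Seq:
    """Stand-in for a sequence-to-sequence model (hypothetical)."""

    def predict(self, src):
        # Placeholder "decoding": a real model would generate a sequence.
        return list(reversed(src))

    def loss(self, src, tgt):
        # Dummy token-mismatch loss in place of cross-entropy.
        pred = self.predict(src)
        return sum(a != b for a, b in zip(pred, tgt)) / max(len(tgt), 1)


def joint_loss(nlg, nlu, annotated, unannotated, alpha=1.0):
    """Supervised losses in both directions on annotated (MR, text) pairs,
    plus a reconstruction loss on text that has no annotation."""
    total = 0.0
    for mr, text in annotated:                    # labelled pairs
        total += nlg.loss(mr, text) + nlu.loss(text, mr)
    for text in unannotated:                      # text-only examples
        mr_hat = nlu.predict(text)                # text -> pseudo-MR
        total += alpha * nlg.loss(mr_hat, text)   # pseudo-MR -> text
    return total


nlg, nlu = ToySeq2Seq(), ToySeq2Seq()
annotated = [(["name=Aromi", "food=Chinese"],
              ["Aromi", "serves", "Chinese", "food"])]
unannotated = [["The", "Vaults", "is", "a", "pub"]]
print(joint_loss(nlg, nlu, annotated, unannotated))
```

In a real system both models would be trained by backpropagation on these combined losses; here the cycle `text -> NLU -> NLG -> text` is what lets unannotated text contribute a training signal.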


