Journal: Computational Intelligence and Neuroscience

Enhancing Text Generation via Parse Tree Embedding



Abstract

Natural language generation (NLG) is a core component of machine translation, dialogue systems, speech recognition, summarization, and related tasks. Existing text generation methods tend to be based on recurrent neural language models (NLMs), which generate sentences from an encoding vector. However, most of these models lack an explicit structured representation for text generation. In this work, we introduce a new generative model for NLG, called Tree-VAE. It first samples a sentence from the training corpus and then generates a new sentence conditioned on the embedding vector of the corresponding parse tree. A Tree-LSTM is used in combination with the Stanford Parser to extract sentence-structure information, which is then used to train a conditional variational autoencoder generator on the embeddings of sentence patterns. The proposed model is extensively evaluated on three different datasets. The experimental results show that the proposed model generates substantially more diverse and coherent text than existing baseline methods.
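To make the pipeline concrete, the sketch below shows how a child-sum Tree-LSTM cell (the standard formulation of Tai et al., 2015) can fold a parse tree bottom-up into a single embedding vector of the kind that would condition the generator. All class names, dimensions, and the toy two-leaf tree are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ChildSumTreeLSTMCell:
    """Minimal child-sum Tree-LSTM cell.

    Composes a node's state from its input vector x and the (h, c)
    states of an arbitrary number of children, so a whole parse tree
    can be folded bottom-up into one fixed-size vector.
    """

    def __init__(self, in_dim, mem_dim):
        self.mem_dim = mem_dim
        # One (W, U, b) triple per gate: input, forget, output, update.
        def p(rows, cols):
            return rng.normal(0.0, 0.1, size=(rows, cols))
        self.W = {g: p(mem_dim, in_dim) for g in "ifou"}
        self.U = {g: p(mem_dim, mem_dim) for g in "ifou"}
        self.b = {g: np.zeros(mem_dim) for g in "ifou"}

    def forward(self, x, children):
        """`children` is a list of (h, c) pairs; empty for leaves."""
        h_sum = sum((h for h, _ in children), np.zeros(self.mem_dim))
        i = sigmoid(self.W["i"] @ x + self.U["i"] @ h_sum + self.b["i"])
        o = sigmoid(self.W["o"] @ x + self.U["o"] @ h_sum + self.b["o"])
        u = np.tanh(self.W["u"] @ x + self.U["u"] @ h_sum + self.b["u"])
        # A separate forget gate per child lets the cell keep or drop
        # each subtree's memory independently.
        c = i * u
        for h_k, c_k in children:
            f_k = sigmoid(self.W["f"] @ x + self.U["f"] @ h_k + self.b["f"])
            c = c + f_k * c_k
        h = o * np.tanh(c)
        return h, c

# Embed a toy parse tree (S (NP w1) (VP w2)) using random word vectors
# in place of real embeddings from a parser's output.
cell = ChildSumTreeLSTMCell(in_dim=8, mem_dim=16)
leaf1 = cell.forward(rng.normal(size=8), [])
leaf2 = cell.forward(rng.normal(size=8), [])
root_h, root_c = cell.forward(rng.normal(size=8), [leaf1, leaf2])
print(root_h.shape)  # (16,): the tree embedding that conditions the decoder
```

In the described system, the leaves and internal nodes would come from the Stanford Parser's constituency tree rather than a hand-built toy tree, and the root vector `root_h` would be fed to the conditional generator.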

