Long and Diverse Text Generation with Planning-based Hierarchical Variational Model

Abstract

Existing neural methods for data-to-text generation are still struggling to produce long and diverse texts: they fail to model input data dynamically during generation, to capture inter-sentence coherence, or to generate diversified expressions. To address these issues, we propose a Planning-based Hierarchical Variational Model (PHVM). Our model first plans a sequence of groups (each group is a subset of input items to be covered by a sentence) and then realizes each sentence conditioned on the planning result and the previously generated context, thereby decomposing long text generation into dependent sentence-generation sub-tasks. To capture expression diversity, we devise a hierarchical latent structure in which a global planning latent variable models the diversity of reasonable plans and a sequence of local latent variables controls sentence realization. Experiments show that our model outperforms state-of-the-art baselines in long and diverse text generation.
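As a concrete illustration of the plan-then-realize decomposition and the two-level latent structure described above, here is a minimal PyTorch sketch. It is not the authors' released implementation: the class name PHVMSketch, all dimensions, and the module names (prior_global, plan_rnn, prior_local, sent_init) are hypothetical. A global planning latent z_p drives a plan RNN whose steps stand in for sentence groups, and each step samples a local latent z_s that initializes that sentence's decoder state.

```python
# Minimal sketch of a hierarchical latent structure in the spirit of PHVM.
# All names and sizes are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

INPUT_DIM, GLOBAL_DIM, LOCAL_DIM, HID = 64, 32, 32, 128

class PHVMSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Global planning latent z_p ~ N(mu, sigma), conditioned on the encoded input.
        self.prior_global = nn.Linear(INPUT_DIM, 2 * GLOBAL_DIM)
        # Plan RNN: each step corresponds to one group, i.e. one sentence to generate.
        self.plan_rnn = nn.GRUCell(GLOBAL_DIM, HID)
        # Local latent z_s per sentence, conditioned on the current plan state.
        self.prior_local = nn.Linear(HID, 2 * LOCAL_DIM)
        # Sentence decoder initial state from plan state + local latent.
        self.sent_init = nn.Linear(HID + LOCAL_DIM, HID)

    @staticmethod
    def sample(stats):
        # Reparameterized Gaussian sample from concatenated (mu, logvar).
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

    def forward(self, enc_input, num_sentences=3):
        # 1) Sample the global planning latent variable.
        z_p = self.sample(self.prior_global(enc_input))
        h = torch.zeros(enc_input.size(0), HID)
        states = []
        for _ in range(num_sentences):
            # 2) Advance the plan; this step stands in for choosing one group.
            h = self.plan_rnn(z_p, h)
            # 3) Sample a local latent controlling this sentence's realization.
            z_s = self.sample(self.prior_local(h))
            # 4) Initialize this sentence's decoder from plan state + local latent.
            states.append(self.sent_init(torch.cat([h, z_s], dim=-1)))
        return torch.stack(states, dim=1)  # (batch, num_sentences, HID)

enc = torch.randn(2, INPUT_DIM)  # stand-in for the encoded input items
print(PHVMSketch()(enc).shape)   # torch.Size([2, 3, 128])
```

In the full model, each plan step would additionally select which input items form the group, and the sentence decoder would condition on those items and the previously generated context; the sketch only shows how the global and local latent levels compose.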
