ACM Transactions on Asian Language Information Processing

μ-Forcing: Training Variational Recurrent Autoencoders for Text Generation



Abstract

It has been previously observed that training Variational Recurrent Autoencoders (VRAE) for text generation suffers from a serious uninformative-latent-variable problem: the model collapses into a plain language model that ignores the latent variables entirely and can only generate repetitive and dull samples. In this article, we explore the reason behind this issue and propose an effective regularizer-based approach to address it. The proposed method directly injects extra constraints on the posteriors of the latent variables into the learning process of VRAE, which can flexibly and stably control the tradeoff between the Kullback-Leibler (KL) term and the reconstruction term, making the model learn dense and meaningful latent representations. Experimental results show that the proposed method outperforms several strong baselines, enabling the model to learn interpretable latent variables and generate diverse, meaningful sentences. Furthermore, the proposed method performs well without resorting to other strategies such as KL annealing.
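The abstract does not spell out the regularizer's exact form. As a rough illustration of the general idea — constraining the KL term so the optimizer cannot drive it to zero — the sketch below applies a "free-bits"-style floor to the analytic KL between a diagonal-Gaussian posterior and a standard-normal prior. The function names and the floor value are illustrative assumptions, not the paper's formulation:

```python
import math

def gaussian_kl(mean, logvar):
    """Analytic KL( N(mean, exp(logvar)) || N(0, I) ), summed over dimensions."""
    return sum(0.5 * (math.exp(lv) + m * m - 1.0 - lv)
               for m, lv in zip(mean, logvar))

def constrained_elbo_loss(recon_loss, kl, kl_floor=2.0):
    """Negative ELBO with a floored KL term: once the KL drops below
    `kl_floor`, the KL contribution is a constant, so the optimizer has
    no incentive to shrink it further (discouraging posterior collapse)."""
    return recon_loss + max(kl, kl_floor)

# A posterior identical to the prior carries no information: KL = 0.
print(gaussian_kl([0.0, 0.0], [0.0, 0.0]))  # 0.0
# With the KL below the floor, the loss is recon + floor, not recon + kl.
print(constrained_elbo_loss(recon_loss=1.5, kl=0.3, kl_floor=2.0))  # 3.5
```

In a real VRAE these scalars would be batch tensors and the reconstruction term a per-token cross-entropy; the point of the floor is only to show one concrete way a constraint on the posterior can stabilize the KL/reconstruction tradeoff without schedules like KL annealing.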


