Annual Meeting of the Association for Computational Linguistics

Disentangled Representation Learning for Non-Parallel Text Style Transfer



Abstract

This paper tackles the problem of disentangling the latent representations of style and content in language models. We propose a simple yet effective approach, which incorporates auxiliary multi-task and adversarial objectives, for style prediction and bag-of-words prediction, respectively. We show, both qualitatively and quantitatively, that the style and content are indeed disentangled in the latent space. This disentangled latent representation learning can be applied to style transfer on non-parallel corpora. We achieve high performance in terms of transfer accuracy, content preservation, and language fluency, in comparison to various previous approaches.
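As a concrete illustration of the auxiliary objectives mentioned in the abstract, below is a minimal PyTorch sketch, not the authors' implementation. It assumes an encoder whose latent code is split into a style part and a content part, trains a multi-task classifier to predict the style label from the style part, and uses an adversarial bag-of-words predictor (implemented here with a gradient-reversal layer) to discourage the style part from carrying content words. All module names, dimensions, and the gradient-reversal trick are illustrative assumptions.

```python
# Minimal sketch of multi-task + adversarial disentanglement objectives.
# Assumptions: the latent code is split into style/content halves; a multi-task
# head predicts the style label from the style half; an adversarial bag-of-words
# head (trained through gradient reversal) pushes content words out of the style
# half. Sizes and module names are illustrative, not from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output


class DisentangledEncoder(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=100, style_dim=8,
                 content_dim=120, num_styles=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, style_dim + content_dim, batch_first=True)
        self.style_dim = style_dim
        # Multi-task head: predict the style label from the style part.
        self.style_clf = nn.Linear(style_dim, num_styles)
        # Adversarial head: predict a bag-of-words distribution from the style
        # part; gradient reversal makes the encoder hide content words from it.
        self.bow_adv = nn.Linear(style_dim, vocab_size)

    def forward(self, tokens):
        _, h = self.rnn(self.embed(tokens))        # h: (1, batch, style+content)
        z = h.squeeze(0)
        z_style, z_content = z[:, :self.style_dim], z[:, self.style_dim:]
        style_logits = self.style_clf(z_style)
        bow_logits = self.bow_adv(GradReverse.apply(z_style))
        return z_style, z_content, style_logits, bow_logits


def auxiliary_losses(style_logits, bow_logits, style_labels, bow_targets):
    # Multi-task style loss (cross-entropy on the style label) plus an
    # adversarial BoW loss (cross-entropy against a normalized bag of words).
    mult_loss = F.cross_entropy(style_logits, style_labels)
    adv_loss = -(bow_targets * F.log_softmax(bow_logits, dim=-1)).sum(-1).mean()
    return mult_loss + adv_loss


if __name__ == "__main__":
    enc = DisentangledEncoder()
    tokens = torch.randint(0, 5000, (4, 12))             # toy batch of token ids
    labels = torch.randint(0, 2, (4,))                    # toy style labels
    bow = torch.zeros(4, 5000).scatter_(1, tokens, 1.0)   # 0/1 bag of words
    bow = bow / bow.sum(-1, keepdim=True)
    _, _, style_logits, bow_logits = enc(tokens)
    loss = auxiliary_losses(style_logits, bow_logits, labels, bow)
    loss.backward()
    print(float(loss))
```

In a fuller setup one could also add the mirror-image objectives on the content part (a multi-task bag-of-words predictor and an adversarial style discriminator) and feed the concatenated latent code into a decoder for reconstruction, which is what enables style transfer on non-parallel corpora.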
