Annual Meeting of the Association for Computational Linguistics

Style Transformer: Unpaired Text Style Transfer without Disentangled Latent Representation



Abstract

Disentangling content and style in the latent space is a prevalent approach to unpaired text style transfer. However, two major issues exist in most current neural models: 1) it is difficult to completely strip the style information from the semantics of a sentence, and 2) the recurrent neural network (RNN) based encoder and decoder, mediated by the latent representation, cannot handle long-range dependencies well, resulting in poor preservation of non-stylistic semantic content. In this paper, we propose the Style Transformer, which makes no assumption about the latent representation of the source sentence and harnesses the attention mechanism of the Transformer to achieve better style transfer and better content preservation. Source code will be available on GitHub.
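The abstract's contrast with RNNs rests on a standard property of Transformer attention: every position attends to every other position in a single step, so long-range dependencies do not have to survive a chain of recurrent updates. A minimal NumPy sketch of generic scaled dot-product attention (an illustration of the mechanism, not the paper's implementation) makes this concrete:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Generic scaled dot-product attention over a toy sequence.

    Each query row attends directly to every key row, so a dependency
    between the first and last token costs one attention step rather
    than a chain of recurrent state updates.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (n_q, n_k) similarity
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V, weights

# Toy self-attention: a 5-token "sentence" with 8-dim representations.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
out, w = scaled_dot_product_attention(X, X, X)
```

Here `out` keeps the shape of the input sequence, and each row of `w` is a distribution over all five positions, including the most distant ones.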

