International Journal of Computer Vision

DRIT++: Diverse Image-to-Image Translation via Disentangled Representations



Abstract

Image-to-image translation aims to learn the mapping between two visual domains. There are two main challenges for this task: (1) the lack of aligned training pairs and (2) multiple possible outputs from a single input image. In this work, we present an approach based on disentangled representation for generating diverse outputs without paired training images. To synthesize diverse outputs, we propose to embed images onto two spaces: a domain-invariant content space capturing shared information across domains and a domain-specific attribute space. At test time, our model takes the encoded content features extracted from a given input, together with attribute vectors sampled from the attribute space, to synthesize diverse outputs. To handle unpaired training data, we introduce a cross-cycle consistency loss based on disentangled representations. Qualitative results show that our model can generate diverse and realistic images on a wide range of tasks without paired training data. For quantitative evaluation, we measure realism with a user study and the Fréchet inception distance, and measure diversity with the perceptual distance metric, the Jensen-Shannon divergence, and the number of statistically-different bins.

