IEEE/CVF Conference on Computer Vision and Pattern Recognition

Mix and Match Networks: Encoder-Decoder Alignment for Zero-Pair Image Translation



Abstract

We address the problem of image translation between domains or modalities for which no direct paired data is available (i.e. zero-pair translation). We propose mix and match networks, based on multiple encoders and decoders aligned in such a way that other encoder-decoder pairs can be composed at test time to perform unseen image translation tasks between domains or modalities for which explicit paired samples were not seen during training. We study the impact of autoencoders, side information, and losses on improving the alignment and transferability of trained pairwise translation models to unseen translations. We show that our approach is scalable and can perform colorization and style transfer between unseen combinations of domains. We evaluate our system in a challenging cross-modal setting where semantic segmentation is estimated from depth images, without explicit access to any depth-semantic segmentation training pairs. Our model outperforms baselines based on pix2pix and CycleGAN models.
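The core idea above can be illustrated with a toy sketch: each modality gets its own encoder into a shared latent space and its own decoder out of it, and any encoder can be composed with any decoder at test time, including pairs never trained together. The linear maps, modality names, and dimensions below are hypothetical stand-ins for trained networks, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT = 4  # size of the shared latent space

# One encoder/decoder (here, a fixed weight matrix standing in for a
# trained network) per modality; "alignment" means every encoder maps
# into the same shared latent space.
modalities = ["rgb", "depth", "segmentation"]
enc = {m: rng.standard_normal((LATENT, 3)) for m in modalities}
dec = {m: rng.standard_normal((3, LATENT)) for m in modalities}

def translate(x, src, dst):
    """Zero-pair translation: encode with the source-modality encoder,
    decode with the target-modality decoder, even if the (src, dst)
    pair was never seen jointly during training."""
    z = enc[src] @ x      # encode into the shared latent space
    return dec[dst] @ z   # decode into the target modality

# Unseen pair at test time: depth -> segmentation.
x_depth = rng.standard_normal(3)
y_seg = translate(x_depth, "depth", "segmentation")
```

In the paper, the alignment that makes this composition meaningful is enforced during pairwise training (e.g. with autoencoders and shared latent constraints); here it is simply assumed.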


