IEEE International Conference on Image Processing

Towards Object Shape Translation Through Unsupervised Generative Deep Models

Abstract

This paper focuses on the problem of unsupervised image-to-image translation. More specifically, we aim at finding a translation network such that objects and shapes that only appear in the source domain are translated to objects and shapes only appearing in the target domain, while style and color features present in the source domain remain the same. To achieve this, we use a domain-specific variational autoencoder and represent each image in its latent space representation. In a second step, we learn a translation between latent spaces of different domains using generative adversarial networks. We evaluate this framework on multiple datasets and verify the effect of multiple perceptual losses. Experiments on the MNIST and SVHN datasets show the effectiveness of the proposed translation method.
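The two-step pipeline described in the abstract (per-domain VAE encoding, then a learned mapping between latent spaces) can be sketched as a toy forward pass. Everything below is an illustrative assumption: the dimensions, the linear layers, and the fixed translation map stand in for networks the paper would train; in the actual method the latent-to-latent translator is learned adversarially with a discriminator, and perceptual losses guide training.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    # Toy random weight matrix standing in for a trained layer.
    return rng.normal(0.0, 0.1, size=(in_dim, out_dim))

# Illustrative dimensions only (e.g. flattened 28x28 MNIST digits).
IMG_DIM, LATENT_DIM = 784, 16

# --- Source-domain VAE encoder: image -> (mu, log_var) -> sampled latent z ---
W_mu, W_logvar = linear(IMG_DIM, LATENT_DIM), linear(IMG_DIM, LATENT_DIM)

def encode(x):
    mu, log_var = x @ W_mu, x @ W_logvar
    eps = rng.standard_normal(mu.shape)
    # Reparameterization trick: z = mu + sigma * eps
    return mu + np.exp(0.5 * log_var) * eps

# --- Cross-domain translator: source-domain latents -> target-domain latents.
# In the paper this map is learned with a GAN; here it is a fixed toy map. ---
W_trans = linear(LATENT_DIM, LATENT_DIM)

def translate(z_src):
    return z_src @ W_trans

# --- Target-domain VAE decoder: latent -> image ---
W_dec = linear(LATENT_DIM, IMG_DIM)

def decode(z_tgt):
    return 1.0 / (1.0 + np.exp(-(z_tgt @ W_dec)))  # sigmoid -> pixels in (0, 1)

# A discriminator scoring translated latents against real target-domain latents
# would drive training; no training loop is shown here.

x_src = rng.random((4, IMG_DIM))          # batch of 4 source-domain "images"
x_out = decode(translate(encode(x_src)))  # full translation pipeline
print(x_out.shape)                        # (4, 784)
```

The point of the sketch is the data flow: translation happens entirely in latent space, so the target-domain decoder only ever sees latents shaped like its own domain's.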
