Unsupervised image-to-image translation has achieved remarkable progress recently. However, most existing approaches train one model per pair of domains, which incurs a heavy cost of $O(n^2)$ in training time and model parameters when $n$ domains must be freely translated to each other in a general setting. To address this problem, we propose a novel and unified framework named Domain-Bank, which consists of a globally shared auto-encoder and $n$ domain-specific encoders/decoders, under the assumption that a universal shared-latent space can be projected. This yields $O(n)$ complexity in model parameters, along with a large reduction in the training-time budget. Besides the high efficiency, we show comparable (or even better) image translation results over the state of the art on various challenging unsupervised image translation tasks, including face image translation, fashion-clothes translation and painting style translation. We also apply the proposed framework to domain adaptation and achieve state-of-the-art performance on digit benchmark datasets. Further, thanks to the explicit representation of the domain-specific decoders as well as the universal shared-latent space, the framework also enables incremental learning: a new domain encoder/decoder can be added to the bank. Linear combinations of different domains' representations can also be obtained by fusing the corresponding decoders.
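The $O(n)$ vs. $O(n^2)$ structure described above can be sketched minimally as follows. All class names, dimensions, and the linear encode/decode maps are illustrative assumptions; the actual model uses trained auto-encoder networks, not random linear projections. The point is only the wiring: one codec per domain, all meeting in the same shared latent space, so any of the $n^2$ translation directions reuses the same $2n$ modules.

```python
import numpy as np

LATENT_DIM = 8  # assumed size of the universal shared-latent space

class DomainCodec:
    """Hypothetical domain-specific encoder/decoder pair that projects
    images into, and reconstructs them from, a latent space shared by
    all domains (stand-in for a trained auto-encoder)."""
    def __init__(self, image_dim, rng):
        self.enc = rng.standard_normal((LATENT_DIM, image_dim)) * 0.01
        self.dec = rng.standard_normal((image_dim, LATENT_DIM)) * 0.01

    def encode(self, x):
        return self.enc @ x

    def decode(self, z):
        return self.dec @ z

class DomainBank:
    """Sketch of the Domain-Bank wiring: n domain-specific codecs around
    one shared latent space, so model size grows O(n) rather than the
    O(n^2) of training a separate translator per domain pair. Adding a
    domain (incremental learning) just appends one more codec."""
    def __init__(self, image_dim, n_domains, seed=0):
        rng = np.random.default_rng(seed)
        self.codecs = [DomainCodec(image_dim, rng) for _ in range(n_domains)]

    def add_domain(self, image_dim, seed):
        # Incremental learning: existing codecs are left untouched.
        self.codecs.append(DomainCodec(image_dim, np.random.default_rng(seed)))

    def translate(self, x, src, dst):
        z = self.codecs[src].encode(x)   # project into the shared latent space
        return self.codecs[dst].decode(z)  # render in the target domain

bank = DomainBank(image_dim=16, n_domains=4)
x = np.ones(16)
y = bank.translate(x, src=0, dst=2)  # domain 0 -> domain 2 via shared z
bank.add_domain(image_dim=16, seed=1)  # now 5 domains, still one codec each
```

Any of the $n(n-1)$ directed translations goes through the same two-hop path (encode to shared $z$, decode in the target domain), which is what keeps the parameter count linear in $n$.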