International Conference on Computer Vision

Attribute Manipulation Generative Adversarial Networks for Fashion Images



Abstract

Recent advances in Generative Adversarial Networks (GANs) have made it possible to conduct multi-domain image-to-image translation with a single generative network. While recent methods such as Ganimation and SaGAN can restrict translations to attribute-relevant regions using attention, they do not perform well as the number of attributes grows, because the training of their attention masks relies mostly on classification losses. To address this and other limitations, we introduce Attribute Manipulation Generative Adversarial Networks (AMGAN) for fashion images. AMGAN's generator network uses class activation maps (CAMs) to empower its attention mechanism, and it also exploits perceptual losses by assigning reference (target) images based on attribute similarities. AMGAN incorporates an additional discriminator network that focuses on attribute-relevant regions to detect unrealistic translations. Additionally, AMGAN can be directed to perform attribute manipulations on specific regions such as the sleeve or torso. Experiments show that AMGAN outperforms state-of-the-art methods under traditional evaluation metrics as well as an alternative metric based on image retrieval.
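The class activation maps mentioned above can be illustrated with a minimal sketch: a CAM weights each spatial feature map of the last convolutional layer by the classifier weight tied to one attribute class, producing a per-pixel relevance map that can serve as a soft attention mask. All shapes, names, and the normalization step below are illustrative assumptions, not the paper's actual implementation.

```python
def class_activation_map(feature_maps, class_weights):
    """Compute a CAM as the weighted sum of feature maps.

    feature_maps: list of K maps, each an HxW list of lists
                  (e.g. the last conv layer's activations).
    class_weights: list of K scalars (the classifier weights
                   for the target attribute class).
    Returns an HxW map rectified and scaled to [0, 1].
    """
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, wk in zip(feature_maps, class_weights):
        for i in range(h):
            for j in range(w):
                cam[i][j] += wk * fmap[i][j]
    # ReLU, then normalize by the peak so the result can act
    # as a soft attention mask over attribute-relevant pixels.
    peak = max(max(v, 0.0) for row in cam for v in row) or 1.0
    return [[max(cam[i][j], 0.0) / peak for j in range(w)]
            for i in range(h)]

# Toy example: two 2x2 feature maps for one attribute class.
maps = [[[1.0, 0.0], [0.0, 0.0]],
        [[0.0, 2.0], [0.0, 0.0]]]
weights = [0.5, 1.0]
cam = class_activation_map(maps, weights)
```

In the toy example the top-right pixel dominates (weighted sum 2.0, normalized to 1.0), so a generator using this map as attention would concentrate its edits there.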
