IEEE/CVF Conference on Computer Vision and Pattern Recognition

Towards Photo-Realistic Virtual Try-On by Adaptively Generating↔Preserving Image Content



Abstract

Image-based virtual try-on aims at transferring a target clothing image onto a reference person, and has become a hot topic in recent years. Prior arts usually focus on preserving the character of a clothing image (e.g. texture, logo, embroidery) when warping it to an arbitrary human pose. However, it remains a big challenge to generate photo-realistic try-on images when heavy occlusions and complex human poses are present in the reference image. To address this issue, we propose a novel virtual try-on network, namely the Adaptive Content Generating and Preserving Network (ACGPN). In particular, ACGPN first predicts the semantic layout of the reference image that will be changed after try-on (e.g. long sleeve shirt→arm, arm→jacket), and then determines whether its image content needs to be generated or preserved according to the predicted semantic layout, leading to photo-realistic try-on results with rich clothing details. ACGPN consists of three major modules. First, a semantic layout generation module utilizes the semantic segmentation of the reference image to progressively predict the desired semantic layout after try-on. Second, a clothes warping module warps the clothing image according to the generated semantic layout, where a second-order difference constraint is introduced to stabilize the warping process during training. Third, an inpainting module for content fusion integrates all information (e.g. reference image, semantic layout, warped clothes) to adaptively produce each semantic part of the human body. Compared with state-of-the-art methods, ACGPN generates photo-realistic images with much better perceptual quality and richer fine details.
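Two of the mechanisms named in the abstract lend themselves to a concrete illustration: the second-order difference constraint that stabilizes warping, and the mask-driven choice between generating and preserving content. The abstract does not give their exact formulations, so the PyTorch sketch below shows one generic, plausible realization of each; the function names, tensor layouts, and mask convention are illustrative assumptions, not taken from the paper.

```python
import torch

def second_order_constraint(ctrl_pts: torch.Tensor) -> torch.Tensor:
    """Second-order difference penalty over a grid of warping control points.

    ctrl_pts: (H, W, 2) tensor of warped TPS control-point coordinates
    (layout assumed for illustration). Each point is pulled toward the
    midpoint of its horizontal and vertical neighbours, discouraging
    abrupt local distortions that fold or tear the clothing texture.
    """
    # horizontal second differences: p[i, j-1] - 2*p[i, j] + p[i, j+1]
    d_h = ctrl_pts[:, :-2] - 2 * ctrl_pts[:, 1:-1] + ctrl_pts[:, 2:]
    # vertical second differences: p[i-1, j] - 2*p[i, j] + p[i+1, j]
    d_v = ctrl_pts[:-2, :] - 2 * ctrl_pts[1:-1, :] + ctrl_pts[2:, :]
    return d_h.abs().mean() + d_v.abs().mean()

def compose(reference: torch.Tensor,
            generated: torch.Tensor,
            change_mask: torch.Tensor) -> torch.Tensor:
    """Adaptive generate↔preserve composition.

    change_mask: (N, 1, H, W) soft mask derived from the predicted
    semantic layout; 1 where content must be (re)generated (e.g. newly
    exposed arms), 0 where reference pixels can be kept as-is.
    """
    return change_mask * generated + (1.0 - change_mask) * reference

# Hypothetical usage: the penalty is added to the warping loss with an
# illustrative weight, and the final image mixes generated and preserved pixels.
# warp_loss = l1_loss(warped_clothes, target) + 0.04 * second_order_constraint(grid)
# try_on = compose(reference_img, inpainted_img, change_mask)
```

The second-difference form is what "second-order difference constraint" most commonly denotes in grid-based warping; an exact reproduction of the module would follow the paper's own definition.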
