Multi-poses Face Frontalization based on Pose Weighted GAN

Abstract

In many scenarios, a frontal face image is the only criterion for verifying a person's identity. However, it is difficult to collect a standard frontal image in an uncontrolled environment. To recover a clear frontal image from a wide variety of profile images, many studies have addressed face frontalization. Some of these require three-dimensional face data or prior pose information, while others do not account for the effect of pose information at all, and many restrict the number of poses allowed in the input face images. Because pose information is not adequately considered, the authenticity of the generated frontal faces is low when multi-pose profile images are given as input. To address this problem, this paper proposes a Pose-weighted Generative Adversarial Network (PWGAN), which adds a pre-trained pose certification module to learn facial pose information. For a single input image, PWGAN combines the fused features with the pose features; for multiple input images, PWGAN uses the pose information to dynamically distribute weights when fusing feature maps. By making full use of pose information, PWGAN lets the generation network learn more about facial features and produce better generation quality. Contrastive experiments show that PWGAN achieves better multi-pose face frontalization than the aforementioned methods.
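The key mechanism described above is pose-driven weighting of per-view feature maps before generation. The following is a minimal sketch of how such a pose-weighted fusion step could look, assuming a PyTorch-style setup; the `encoder`, `pose_estimator`, and the softmax-over-yaw weighting are illustrative assumptions, not the authors' exact design.

```python
# A minimal sketch of pose-weighted feature fusion (assumption: PyTorch).
# Module names and the weighting rule (softmax over negated |yaw|) are
# hypothetical placeholders, not the published PWGAN implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoseWeightedFusion(nn.Module):
    """Fuse per-view encoder feature maps using pose-derived weights."""

    def __init__(self, encoder: nn.Module, pose_estimator: nn.Module):
        super().__init__()
        self.encoder = encoder                 # profile image -> feature map
        self.pose_estimator = pose_estimator   # pre-trained pose module (frozen)

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (num_views, C, H, W) profile images of the same identity
        feats = torch.stack([self.encoder(v.unsqueeze(0)).squeeze(0)
                             for v in views])            # (num_views, F, h, w)
        with torch.no_grad():
            yaw = torch.stack([self.pose_estimator(v.unsqueeze(0)).squeeze()
                               for v in views])          # estimated yaw per view
        # Assumption: views closer to frontal (small |yaw|) get larger weights.
        weights = F.softmax(-yaw.abs(), dim=0)            # (num_views,)
        fused = (weights.view(-1, 1, 1, 1) * feats).sum(dim=0)
        return fused                                      # (F, h, w)
```

The fused feature map would then be passed to the generator; for a single input view the weighting collapses to 1, which matches the abstract's single-image case of combining fused features with pose features.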