Face Frontalization Using an Appearance-Flow-Based Convolutional Neural Network

IEEE Transactions on Image Processing
Abstract

Facial pose variation is one of the major factors that make face recognition (FR) challenging. One popular solution is to convert non-frontal faces to frontal ones, on which FR is then performed. Rotating a face changes its pixel values, so existing CNN-based methods learn to synthesize frontal faces in color space. However, this learning problem in color space is highly non-linear, causing the synthetic frontal faces to lose fine facial textures. In this paper, we take the view that the non-frontal-to-frontal pixel changes are essentially caused by geometric transformations (rotation, translation, and so on) in space. We therefore aim to learn the non-frontal-to-frontal facial conversion in the spatial domain rather than the color domain, which eases the learning task. To this end, we propose an appearance-flow-based face frontalization convolutional neural network (A3F-CNN). Specifically, A3F-CNN learns to establish a dense correspondence between the non-frontal and frontal faces. Once the correspondence is built, frontal faces are synthesized by explicitly "moving" pixels from the non-frontal one, so the synthetic frontal faces preserve fine facial textures. To improve training convergence, an appearance-flow-guided learning strategy is proposed. In addition, a generative adversarial network loss is applied to achieve more photorealistic faces, and a face mirroring method is introduced to handle the self-occlusion problem. Extensive experiments are conducted on face synthesis and pose-invariant FR. Results show that our method synthesizes more photorealistic faces than existing methods in both controlled and uncontrolled lighting environments. Moreover, we achieve very competitive FR performance on the Multi-PIE, LFW, and IJB-A databases.
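The core idea of synthesizing a frontal face by "moving" pixels rather than regressing colors can be sketched as dense flow-field warping with bilinear sampling. The snippet below is a minimal NumPy illustration of that warping step only, not the authors' code: in A3F-CNN the flow field would be predicted by a CNN and the sampling made differentiable, whereas here `warp_by_flow` and the toy identity flow are hypothetical stand-ins.

```python
import numpy as np

def warp_by_flow(src, flow):
    """Warp a source image with a dense appearance flow.

    src:  (H, W, C) float array, e.g. the non-frontal face.
    flow: (H, W, 2) float array; flow[y, x] = (sx, sy) gives, for each
          output (frontal) pixel, the sub-pixel location in `src` whose
          color it copies -- pixels are explicitly "moved", so fine
          textures survive instead of being re-synthesized.
    Returns the warped (H, W, C) image via bilinear interpolation.
    """
    H, W, _ = src.shape
    sx = np.clip(flow[..., 0], 0, W - 1)
    sy = np.clip(flow[..., 1], 0, H - 1)
    x0 = np.floor(sx).astype(int); x1 = np.minimum(x0 + 1, W - 1)
    y0 = np.floor(sy).astype(int); y1 = np.minimum(y0 + 1, H - 1)
    wx = (sx - x0)[..., None]  # horizontal interpolation weight
    wy = (sy - y0)[..., None]  # vertical interpolation weight
    top = src[y0, x0] * (1 - wx) + src[y0, x1] * wx
    bot = src[y1, x0] * (1 - wx) + src[y1, x1] * wx
    return top * (1 - wy) + bot * wy

# Sanity check: the identity flow (each pixel samples itself) must
# return the input unchanged.
src = np.arange(12, dtype=float).reshape(3, 4, 1)
ys, xs = np.meshgrid(np.arange(3), np.arange(4), indexing="ij")
identity_flow = np.stack([xs, ys], axis=-1).astype(float)
out = warp_by_flow(src, identity_flow)
assert np.allclose(out, src)
```

In the paper's setting, a learned (non-identity) flow would map each frontal-view pixel back to its corresponding location on the rotated face; the GAN loss and face mirroring described above then refine regions the flow cannot reach, such as self-occluded areas.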
