IEEE Transactions on Information Forensics and Security

3-D Generic Elastic Models for Fast and Texture Preserving 2-D Novel Pose Synthesis



Abstract

This paper provides an in-depth analysis of face shape alignment for pose-insensitive face recognition. The dissimilarity between two face images can be modeled as the difference in intensity between the two images after warping both faces onto the same shape. To achieve this, we must first align each face image independently to obtain a sparse 2-D shape representation, which we do using a Combination of ASMs and AAMs (CASAAMs). We then exchange the two shapes and obtain new intensity (texture) faces based on the exchanged shapes. This aligns the two faces with increased pixel-level correspondence while simultaneously achieving a degree of pose correction. To account for large pose variation, it becomes necessary to model the underlying 3-D face structure for the synthesis of novel 2-D poses. However, in many real-world scenarios only a single image of the subject is provided, and acquiring a 3-D model is not always feasible. To tackle this common scenario, we propose a novel approach for modeling faces, called the 3D Generic Elastic Model (3D-GEM), which can be deformed from a single 2-D image. Our analysis shows that the 3-D depth information of human faces does not change dramatically across people, indicating that precise depth information for a person is not needed to generate useful novel 2-D poses; this finding is what makes our method feasible. We thus demonstrate that our 3-D face model can be produced efficiently from a generic depth model that is elastically deformed based on the input facial features. This face model can then be rotated in 3-D to synthesize any arbitrary 2-D facial pose. Experimental results show that 3-D faces modeled by the proposed approach effectively handle large 3-D pose changes in face alignment and can be used to achieve pose-tolerant face recognition. We also provide comparative results of face synthesis obtained with an actual 3-D face scanner and with our approach, showing that the proposed modeling approach is both effective and efficient.
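
The pose-synthesis idea described in the abstract (attach a generic, person-independent depth model to sparse 2-D facial landmarks, rotate the resulting 3-D shape, project it back to 2-D, and warp the original texture to the new pose) can be illustrated with a short sketch. The code below is a minimal illustration under simplifying assumptions, not the authors' 3D-GEM implementation: the landmark layout, the generic depth vector, the weak-perspective camera, and the piecewise-affine texture warp via scikit-image are all stand-ins chosen for this example.

```python
# Minimal sketch of generic-depth novel pose synthesis (NOT the authors' 3D-GEM code).
# Assumes sparse 2-D landmarks (e.g. from an ASM/AAM-style fitter) and a matching
# vector of generic, person-independent depth values for those landmarks.
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp


def synthesize_pose(image, landmarks_2d, generic_depth, yaw_deg=30.0, pitch_deg=0.0):
    """Attach generic depth to 2-D landmarks, rotate the 3-D shape, project it
    back to 2-D, and piecewise-affine warp the texture to the new pose.

    image         : (H, W, 3) array, roughly frontal face
    landmarks_2d  : (N, 2) array of (x, y) landmark positions in `image`
    generic_depth : (N,) array of relative depths from a generic depth model
    """
    pts = np.asarray(landmarks_2d, dtype=float)
    z = np.asarray(generic_depth, dtype=float)

    # Centre the shape so the rotation happens about the face centroid.
    centre = pts.mean(axis=0)
    shape_3d = np.column_stack([pts - centre, z - z.mean()])   # (N, 3)

    # Rotation from yaw (about the y axis) and pitch (about the x axis).
    yaw, pitch = np.deg2rad([yaw_deg, pitch_deg])
    Ry = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(yaw), 0.0, np.cos(yaw)]])
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(pitch), -np.sin(pitch)],
                   [0.0, np.sin(pitch), np.cos(pitch)]])
    rotated = shape_3d @ (Ry @ Rx).T

    # Weak-perspective projection: drop z and move back to image coordinates.
    new_pts = rotated[:, :2] + centre

    # Warp the original texture onto the rotated shape. estimate() is given
    # (output coords, input coords) because skimage's warp expects the inverse map.
    tform = PiecewiseAffineTransform()
    tform.estimate(new_pts, pts)
    synthesized = warp(image, tform, output_shape=image.shape[:2])
    return synthesized, new_pts


if __name__ == "__main__":
    # Toy call on random data just to show the interface; landmarks and depths
    # here are hypothetical placeholders.
    rng = np.random.default_rng(0)
    img = rng.random((128, 128, 3))
    lm = rng.uniform(32, 96, size=(20, 2))
    depth = rng.uniform(0, 20, size=20)
    out, lm_rot = synthesize_pose(img, lm, depth, yaw_deg=25.0)
    print(out.shape, lm_rot.shape)
```

In the method described above, the dense generic depth model is elastically deformed to the subject's detected facial features before rotation; this sketch simply attaches depth values to the sparse landmarks to keep the example short.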
