
3D FACIAL POSE AND EXPRESSION ESTIMATING METHOD USING AAM AND ESTIMATED DEPTH INFORMATION


Abstract

PURPOSE: A three-dimensional facial pose and expression estimating method using an active appearance model (AAM) and estimated depth information is provided. The method learns an input image of a frontal face with the AAM and combines the AAM with face depth information estimated from the input face images, so that face fitting can be performed for various facial poses without depending on prior learning.

CONSTITUTION: A triangular mesh is formed by marking landmarks on an input image (S100). A two-dimensional AAM face model is generated from the AAM parameters defined in a learning process by the AAM for the frontal face among the landmark-marked input images (S200). Two-dimensional transform parameters are applied to the generated AAM face model (S300). Estimated three-dimensional face depth information is added to the AAM face model (S400, S500). Three-dimensional transform parameters are applied to the resulting three-dimensional face model (S600). The two-dimensional and three-dimensional transform parameters are updated for every input image frame (S700).

[Reference numerals]
(S100) A triangular mesh is formed by marking landmarks on input images showing the front and two sides of a face.
(S200) A two-dimensional active appearance model (AAM) face model is generated from the AAM parameters defined in a learning process by the AAM for the frontal face among the landmark-marked input images.
(S300) The two-dimensional transform parameters are applied to the generated AAM face model, which is fitted to the input image.
(S400) Face depth information is estimated from the front and two side views of the face.
(S500) A three-dimensional face model is created by applying the estimated face depth information as the Z axis of the AAM face model.
(S600) The three-dimensional transform parameters are applied to the three-dimensional face model, which is fitted to the input image.
(S700) The two- and three-dimensional transform parameters are updated at every image frame in which the pose and appearance of the face change in three dimensions, by repeatedly performing steps S100 through S600.
(S800) The directions of the mesh values in the overlapped areas are identified, and the weight of the mesh values of overlapped areas whose directions contradict each other is set to 0 by replacing the face texture within the triangle with an average texture.
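The following is a minimal sketch, not the patented implementation: it only illustrates the flow the abstract describes (a learned 2D AAM shape driven by 2D transform parameters in S300, lifted to 3D with estimated per-landmark depth in S400-S500, and fitted with 3D transform parameters in S600). All function names, the similarity-transform parameterization, and the orthographic projection are assumptions made for illustration.

```python
import numpy as np

def aam_shape_2d(mean_shape, basis, p):
    """2D AAM shape: learned mean shape plus shape basis modes (S200). basis: (N, 2, k), p: (k,)."""
    return mean_shape + basis @ p

def fit_2d(shape_2d, scale, theta, t):
    """Apply 2D transform parameters (assumed here to be a similarity transform) for 2D fitting (S300)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return scale * shape_2d @ R.T + t

def lift_to_3d(shape_2d, depth_z):
    """Attach estimated per-landmark depth as the Z axis to form a 3D face model (S400-S500)."""
    return np.hstack([shape_2d, depth_z[:, None]])

def fit_3d(shape_3d, R3, t3):
    """Apply 3D transform parameters and project (orthographically, as an assumption) to the image plane (S600)."""
    return (shape_3d @ R3.T + t3)[:, :2]

# Toy usage with placeholder data (68 landmarks, 5 shape modes); real inputs
# would come from the landmark-marked front and side images.
rng = np.random.default_rng(0)
N, k = 68, 5
mean_shape = rng.normal(size=(N, 2))
basis = rng.normal(size=(N, 2, k))
p = rng.normal(size=k)

shape2d = aam_shape_2d(mean_shape, basis, p)
shape2d = fit_2d(shape2d, scale=1.1, theta=0.1, t=np.array([2.0, -1.0]))
depth = rng.normal(size=N)          # would be estimated from front/side views (S400)
shape3d = lift_to_3d(shape2d, depth)
R3 = np.eye(3)                      # placeholder head rotation; updated every frame (S700)
projected = fit_3d(shape3d, R3, t3=np.zeros(3))
print(projected.shape)              # (68, 2) landmark positions in the image
```

In this reading, S700 corresponds to re-estimating (scale, theta, t) and (R3, t3) for each new frame so the model tracks changing pose and expression.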

Bibliographic data

  • Publication number: KR101333836B1

    Patent type:

  • Publication date: 2013-11-29

    Original document format: PDF

  • Applicant/patentee:

    Application/patent number: KR20120020657

  • Inventors: 강행봉; 주명호

    Filing date: 2012-02-28

  • IPC classification: G06T17/20

  • Country: KR

  • Database entry time: 2022-08-21 15:44:13
