Journal: 《计算机工程与设计》 (Computer Engineering and Design)

表演驱动的二维人脸表情合成 (Performance-Driven 2-D Facial Expression Synthesis)


Abstract

In order to generate facial expression animation, a performance-driven 2-D facial expression synthesis method is presented. First, the key points on the face are located with active appearance models, and the motion parameters of the face are extracted from these key points. Second, the face is divided into several regions, and several example expression images of the target face are acquired. Finally, the interpolation coefficients are derived from the facial motion parameters, and the corresponding expression images of the target face are synthesized as a linear combination of the example images. The method is computationally simple and effective, produces highly realistic results, and is suited to fields such as digital entertainment and video conferencing.
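The pipeline described above — extract facial motion parameters, derive interpolation coefficients, then linearly blend the example expression images — can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the least-squares fit for the weights, the function names, and the per-image (rather than per-region) blending are all assumptions made for the sketch.

```python
import numpy as np

def solve_interpolation_weights(motion_params, example_params):
    """Derive interpolation coefficients from facial motion parameters.

    motion_params: (d,) motion parameters tracked from the performer's face.
    example_params: (k, d) motion parameters of the k example expressions.
    Fits weights w minimizing ||example_params.T @ w - motion_params|| by
    least squares (an assumed choice), then clips to non-negative and
    normalizes so the weights sum to 1.
    """
    A = np.asarray(example_params, dtype=np.float64).T  # (d, k)
    b = np.asarray(motion_params, dtype=np.float64)     # (d,)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    w = np.clip(w, 0.0, None)
    s = w.sum()
    return w / s if s > 0 else np.full(len(example_params), 1.0 / len(example_params))

def blend_expressions(example_images, weights):
    """Synthesize the target expression as a linear combination of example images."""
    imgs = np.asarray(example_images, dtype=np.float64)  # (k, H, W[, C])
    out = np.tensordot(np.asarray(weights, dtype=np.float64), imgs, axes=1)
    return np.clip(out, 0, 255).astype(np.uint8)
```

For example, if the tracked motion parameters lie three quarters of the way between the "neutral" and "smile" example parameters, the solved weights are roughly (0.25, 0.75), and the output image is the corresponding weighted blend of the two example photographs. In the paper's method this blending is done per face region rather than on the whole image, which keeps upper- and lower-face motions independent.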
