International Conference on Technologies for E-Learning and Digital Entertainment

Real-Time Face Pose Tracking and Facial Expression Synthesizing for the Animation of 3D Avatar



Abstract

This paper introduces a novel approach to vision-based head motion tracking and facial expression cloning for creating realistic facial animation of a 3D avatar in real time. Accurate head pose estimation and facial expression tracking are critical problems in developing vision-based computer animation. The proposed method consists of dynamic head pose estimation and facial expression cloning. The head pose estimation technique robustly estimates the 3D head pose from a sequence of input video images. Given an initial reference template of the head image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image. By updating the template dynamically, the head pose can be recovered robustly despite lighting variation and self-occlusion. In addition, to produce realistic 3D face animation, the variation of the major facial feature points is tracked with optical flow and retargeted to the 3D avatar. A Gaussian RBF is used to deform the local region of the 3D face model around the major feature points. During model deformation, clusters of regional feature points around the major facial features are estimated, and the positions of the clusters are updated according to the variation of the major feature points. The experiments show that the proposed vision-based animation technique estimates the 3D head pose efficiently and produces more realistic 3D facial animation than a feature-based tracking method.
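The cylindrical-model pose recovery described in the abstract can be sketched as follows: sample points on a cylinder approximating the head, rotate and translate them by the current pose estimate, and project them into the image with a pinhole camera. The radius, height, and focal length below are illustrative placeholder values, not parameters from the paper.

```python
import numpy as np

def rotation(rx, ry, rz):
    """Euler-angle rotation matrix (rotate about x, then y, then z)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project_cylinder(pose, radius=0.09, height=0.22, f=500.0, n=64):
    """Sample n points on a cylindrical head model, apply the 6-DOF pose
    (rx, ry, rz, tx, ty, tz), and project with focal length f."""
    rx, ry, rz, tx, ty, tz = pose
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    h = np.linspace(-height / 2, height / 2, n)
    pts = np.stack([radius * np.cos(theta), h, radius * np.sin(theta)], axis=1)
    cam = pts @ rotation(rx, ry, rz).T + np.array([tx, ty, tz])
    return f * cam[:, :2] / cam[:, 2:3]  # perspective division to pixels

# Slight yaw, head 0.6 m in front of the camera.
uv = project_cylinder([0.0, 0.1, 0.0, 0.0, 0.0, 0.6])
```

In a full tracker, the pose would be refined by comparing the image intensities under these projected points against the dynamically updated template, which is how the method stays robust to lighting change and self-occlusion.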
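The Gaussian RBF deformation step can be sketched in a few lines: solve for kernel weights so that each major feature point moves exactly by its tracked displacement, then apply the interpolated displacement field to the surrounding vertices. The control-point coordinates and kernel width are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gaussian_rbf_deform(vertices, controls, displacements, sigma=0.15):
    """Deform mesh vertices so each control point moves by its displacement,
    with Gaussian RBF kernels interpolating smoothly in between."""
    # Pairwise squared distances between control points, then kernel matrix.
    d2 = np.sum((controls[:, None, :] - controls[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    # Solve for one RBF weight vector per coordinate axis.
    w = np.linalg.solve(K, displacements)        # shape (n_controls, 3)
    # Kernel responses of every vertex to every control point.
    d2v = np.sum((vertices[:, None, :] - controls[None, :, :]) ** 2, axis=-1)
    Kv = np.exp(-d2v / (2.0 * sigma ** 2))
    return vertices + Kv @ w

# Two hypothetical feature points; the first (e.g. a mouth corner) moves up.
controls = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
disp = np.array([[0.0, 0.2, 0.0], [0.0, 0.0, 0.0]])
verts = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [2.0, 0.0, 0.0]])
moved = gaussian_rbf_deform(verts, controls, disp)
```

Because the kernel response falls off with distance, a vertex coincident with a control point inherits its full displacement, a mid-way vertex moves partially, and a distant vertex is nearly unaffected, which matches the paper's idea of deforming only the local region around each major feature point.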
