
Real-Time Face Pose Tracking and Facial Expression Synthesizing for the Animation of 3D Avatar


Abstract

This paper introduces a novel approach to vision-based head motion tracking and facial expression cloning for creating realistic facial animation of a 3D avatar in real time. Accurate head pose estimation and facial expression tracking are critical problems to be solved in developing vision-based computer animation. The proposed method consists of dynamic head pose estimation and facial expression cloning. The proposed head pose estimation technique can robustly estimate the 3D head pose from a sequence of input video images. Given an initial reference template of the head image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image. By updating the template dynamically, the head pose can be recovered robustly despite lighting variation and self-occlusion. In addition, to produce realistic 3D face animation, the variation of the major facial feature points is tracked using optical flow and retargeted to the 3D avatar. We exploit a Gaussian RBF to deform the local region of the 3D face model around the major feature points. During the model deformation, clusters of regional feature points around the major facial features are estimated, and the positions of the clusters are changed according to the variation of the major feature points. The experiments show that the proposed vision-based animation technique estimates the 3D head pose more efficiently and produces more realistic 3D facial animation than a feature-based tracking method.
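As a rough illustration of the expression-retargeting step described in the abstract, the sketch below deforms the local region of a face mesh around tracked feature points using Gaussian RBF weights. This is a minimal sketch, not the authors' implementation: the function name, the sigma radius, and the use of one isotropic Gaussian per feature point are illustrative assumptions, and the feature-point displacements are assumed to come from the optical-flow tracking stage.

import numpy as np

def gaussian_rbf_deform(vertices, feature_pts, feature_deltas, sigma=0.05):
    """Deform mesh vertices locally around moved feature points.

    vertices:       (N, 3) vertex positions of the 3D face model
    feature_pts:    (K, 3) rest positions of the major feature points on the model
    feature_deltas: (K, 3) displacements of the feature points, e.g. recovered
                    from optical-flow tracking and mapped onto the model
    sigma:          radius of influence of each feature point (model units)
    """
    deformed = vertices.astype(float).copy()
    for p, d in zip(feature_pts, feature_deltas):
        # Gaussian falloff: vertices near the feature point follow its motion,
        # distant vertices stay put, giving a smooth local deformation.
        dist2 = np.sum((vertices - p) ** 2, axis=1)
        w = np.exp(-dist2 / (2.0 * sigma ** 2))
        deformed += w[:, None] * d
    return deformed

A refinement consistent with the clustering step mentioned in the abstract would be to restrict each Gaussian's support to the cluster of regional feature points estimated around the corresponding major feature, rather than letting every feature point influence all vertices.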

Bibliographic record

  • Source
    《》 | 2007 | pp. 191-201 | 11 pages
  • Venue: Hong Kong (CN)
  • Author affiliations

    Department of Computer Science, Kyonggi University, Yui-Dong Suwon, Korea;

    HanSoft, Prime Center 546-4 Guui-Dong, Kwangjin-Gu, Seoul, Korea;

    Humintec, Wonchun-Dong, Suwon, Korea;

  • Conference organizer
  • Original format: PDF
  • Language: eng
  • CLC classification: Computer software;
  • Keywords

  • Date added: 2022-08-26 14:20:46
