
Expressive 3D face synthesis by multi-space modeling



Abstract

Parameterization of facial expressions is essential for generating vivid virtual avatars. This paper proposes a novel 3D parameterization method for facial images that synthesizes expressive faces by multi-space modeling. Given a face photograph, we first build a 3D face model by synthesis in the facial shape space. Selected key expressions from the facial expression space are then transferred to the newly synthesized facial shape. Finally, to produce accurate timing of the 3D facial animation, the blending coefficients of each frame are estimated in the individual blend coefficients' space with respect to motion capture data. By exploiting the advantages of multiple spaces, i.e. the facial shape space, the facial expression space, and the individual blend coefficients' space, our algorithm provides an effective parameterization of facial images. Experiments show that our method produces promising expressive 3D faces, even for subjects whose face appears only once online.
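A minimal sketch of the per-frame coefficient estimation step described in the abstract, under the assumption that the "individual blend coefficients' space" is spanned by a neutral shape plus offsets of the selected key expressions, with weights fit to motion-capture data by least squares (the function and variable names here are illustrative, not from the paper):

```python
import numpy as np

def estimate_blend_weights(neutral, expressions, frame_markers):
    """Solve frame_markers ~ neutral + B @ w for per-frame weights w.

    neutral       : (3N,) vertex positions of the neutral face
    expressions   : (K, 3N) vertex positions of K key expressions
    frame_markers : (3N,) captured positions for one animation frame
    """
    # Offset basis: each key expression minus the neutral shape.
    B = (expressions - neutral).T                       # shape (3N, K)
    w, *_ = np.linalg.lstsq(B, frame_markers - neutral, rcond=None)
    # Clamp weights to [0, 1] so the blend stays inside the expression space.
    return np.clip(w, 0.0, 1.0)

# Toy example: two key expressions on a 2-vertex (6-coordinate) face.
neutral = np.zeros(6)
expressions = np.array([[1, 0, 0, 0, 0, 0],
                        [0, 1, 0, 0, 0, 0]], dtype=float)
frame = np.array([0.5, 0.25, 0, 0, 0, 0])
w = estimate_blend_weights(neutral, expressions, frame)  # ~ [0.5, 0.25]
```

Repeating this solve for every captured frame yields a weight trajectory, which is what gives the synthesized animation its timing.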

