
Expressive Face Animation Synthesis Based on Dynamic Mapping Method



Abstract

In this paper, we present a framework for a speech-driven face animation system with expressions. It systematically addresses audio-visual data acquisition, expressive trajectory analysis, and audio-visual mapping. Within this framework, we learn the correlation between neutral facial deformation and expressive facial deformation using a Gaussian Mixture Model (GMM). A hierarchical structure is proposed to map acoustic parameters to lip FAPs. The synthesized neutral FAP streams are then extended with expressive variations according to the prosody of the input speech. Quantitative evaluation of the experimental results is encouraging, and the synthesized face shows realistic quality.
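The neutral-to-expressive mapping described in the abstract can be illustrated with GMM-based regression: fit a joint GMM over concatenated neutral and expressive parameter vectors, then predict the expressive vector as the conditional expectation given a neutral input. The sketch below is a minimal illustration of that general technique, not the paper's implementation; the dimensionalities, synthetic data, and component count are assumptions for demonstration.

```python
# Sketch of GMM-based regression from neutral to expressive facial
# parameters. All data here is synthetic; the paper's actual FAP
# dimensions and training corpus are not reproduced.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic training pairs: neutral (x) and expressive (y) deformation
# parameters (4-D each, an arbitrary choice for brevity).
n, dx, dy = 500, 4, 4
x = rng.normal(size=(n, dx))
y = 2.0 * x + 0.1 * rng.normal(size=(n, dy))  # simple linear relation

# Fit a joint GMM over concatenated [x, y] vectors.
gmm = GaussianMixture(n_components=3, covariance_type="full",
                      random_state=0)
gmm.fit(np.hstack([x, y]))

def gmm_regress(x_new):
    """Conditional expectation E[y | x] under the joint GMM."""
    log_w = np.zeros(gmm.n_components)
    cond_means = np.zeros((gmm.n_components, dy))
    for k in range(gmm.n_components):
        mu, S = gmm.means_[k], gmm.covariances_[k]
        mu_x, mu_y = mu[:dx], mu[dx:]
        Sxx = S[:dx, :dx]   # covariance of x block
        Syx = S[dx:, :dx]   # cross-covariance y-x
        diff = x_new - mu_x
        # Log marginal likelihood of x under component k (for the
        # posterior responsibilities), plus the component prior.
        _, logdet = np.linalg.slogdet(Sxx)
        log_w[k] = (np.log(gmm.weights_[k])
                    - 0.5 * (diff @ np.linalg.solve(Sxx, diff)
                             + logdet + dx * np.log(2.0 * np.pi)))
        # Per-component conditional mean: mu_y + Syx Sxx^{-1} (x - mu_x)
        cond_means[k] = mu_y + Syx @ np.linalg.solve(Sxx, diff)
    # Responsibility-weighted average of the conditional means.
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return w @ cond_means

y_pred = gmm_regress(np.zeros(dx))
```

Since the synthetic data is centered at the origin with y ≈ 2x, a neutral input of zeros should map to an expressive vector near zero; the same conditional-expectation machinery applies unchanged to real neutral/expressive FAP pairs.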
