IEEE Transactions on Neural Networks

Real-time speech-driven face animation with expressions using neural networks



Abstract

A real-time speech-driven synthetic talking face provides an effective multimodal communication interface in distributed collaboration environments. Nonverbal gestures such as facial expressions are important to human communication and should be considered by speech-driven face animation systems. In this paper, we present a framework that systematically addresses facial deformation modeling, automatic facial motion analysis, and real-time speech-driven face animation with expression using neural networks. Based on this framework, we learn a quantitative visual representation of the facial deformations, called the motion units (MUs). A facial deformation can be approximated by a linear combination of the MUs weighted by MU parameters (MUPs). We develop an MU-based facial motion tracking algorithm which is used to collect an audio-visual training database. Then, we construct a real-time audio-to-MUP mapping by training a set of neural networks using the collected audio-visual training database. The quantitative evaluation of the mapping shows the effectiveness of the proposed approach. Using the proposed method, we develop the functionality of real-time speech-driven face animation with expressions for the iFACE system. Experimental results show that the synthetic expressive talking face of the iFACE system is comparable with a real face in terms of the effectiveness of their influences on bimodal human emotion perception.
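The deformation model described in the abstract — a facial deformation approximated by a linear combination of motion units (MUs) weighted by MU parameters (MUPs), with the MUPs supplied by the audio-to-MUP mapping — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the mesh size, number of MUs, the mean-face term, and all data are assumptions for demonstration.

```python
import numpy as np

# Illustrative dimensions (not from the paper).
n_vertices = 500   # facial mesh vertices; each contributes x, y, z
n_mus = 7          # number of learned motion units

rng = np.random.default_rng(0)
mean_face = rng.standard_normal(3 * n_vertices)          # neutral face geometry
mu_basis = rng.standard_normal((3 * n_vertices, n_mus))  # one MU per column

def deform(mups: np.ndarray) -> np.ndarray:
    """Approximate a deformed face as the neutral face plus a linear
    combination of the MU basis vectors weighted by the MUPs."""
    return mean_face + mu_basis @ mups

# In the paper's pipeline the MUPs would come from the trained
# audio-to-MUP neural networks; here they are random stand-ins.
mups = rng.standard_normal(n_mus)
face = deform(mups)
print(face.shape)  # (1500,)
```

Because the model is linear in the MUPs, tracking (estimating MUPs from observed facial motion) and synthesis (generating deformations from predicted MUPs) both reduce to operations on the same fixed basis.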

