Published in: IEEE Transactions on Circuits and Systems for Video Technology

Animating Lip-Sync Characters With Dominated Animeme Models


Abstract

Character speech animation is traditionally considered important but tedious work, especially when lip synchronization (lip-sync) is taken into consideration. Although some methods have been proposed to ease the burden on artists creating facial and speech animation, almost none of them is both fast and efficient. In this paper, we introduce a framework for synthesizing lip-sync character speech animation in real time from a given speech sequence and its corresponding text. We start by training dominated animeme models (DAMs) for each kind of phoneme, learning the character's animation control signal through an expectation-maximization (EM)-style optimization approach. The DAMs are further decomposed into polynomial-fitted animeme models and corresponding dominance functions that take coarticulation into account. Finally, given a novel speech sequence and its corresponding text, the animation control signal of the character can be synthesized in real time with the trained DAMs. The synthesized lip-sync animation can even preserve exaggerated characteristics of the character's facial geometry. Moreover, since our method performs in real time, it can be used for many applications, such as lip-sync animation prototyping, multilingual animation reproduction, avatar speech, and mass animation production. Furthermore, the synthesized animation control signal can be imported into 3-D packages for further adjustment, so our method can be easily integrated into an existing production pipeline.
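The decomposition described above, a per-phoneme animeme curve blended by an overlapping dominance function, can be illustrated with a minimal sketch. This is not the paper's implementation: the polynomial animemes, the Gaussian-shaped dominance weights, and all names (`poly_eval`, `synthesize`, the toy phoneme timeline) are illustrative assumptions chosen only to show how dominance-weighted blending of neighboring phonemes yields a smooth, coarticulated control signal.

```python
import math

def poly_eval(coeffs, t):
    """Evaluate a polynomial-fitted animeme [c0, c1, ...] at local time t."""
    return sum(c * t ** i for i, c in enumerate(coeffs))

def dominance(t_abs, center, width):
    """Bell-shaped dominance weight centered on a phoneme's midpoint.
    It extends past the phoneme's boundaries, so neighbors overlap."""
    return math.exp(-((t_abs - center) / width) ** 2)

def synthesize(models, timeline, t_abs):
    """Blend the animeme values of all phonemes active around time t_abs.

    models: phoneme -> polynomial coefficients (the trained animeme)
    timeline: list of (phoneme, start, end) from the aligned text/speech
    """
    num = den = 0.0
    for ph, start, end in timeline:
        center = 0.5 * (start + end)
        width = max(end - start, 1e-6)
        w = dominance(t_abs, center, width)
        # Clamp local time so a phoneme's curve saturates outside its span.
        t_local = min(max((t_abs - start) / (end - start), 0.0), 1.0)
        num += w * poly_eval(models[ph], t_local)
        den += w
    return num / den if den else 0.0

# Toy example: two phonemes with linear animemes (value = c0 + c1 * t).
models = {"AA": [0.0, 1.0], "M": [1.0, -1.0]}
timeline = [("AA", 0.0, 0.2), ("M", 0.2, 0.4)]
frames = [synthesize(models, timeline, 0.01 * k) for k in range(41)]
```

Because each frame is evaluated independently from the trained curves, synthesis is a constant-time blend per frame, which is consistent with the real-time claim in the abstract.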

