Computer Animation and Virtual Worlds

Stylized synthesis of facial speech motions

Abstract

Stylized synthesis of facial speech motions is central to facial animation. Most synthesis algorithms emphasize the plausible concatenation of captured motion segments, whereas the dynamic modeling of speech units, e.g., visemes and visyllables (the visual appearance of a syllable), has drawn little attention. In this paper, we address the fundamental issues of stylized dynamic modeling of visyllables and learn a decomposable, generalized model for stylized motion synthesis. The visyllable model comprises two parts: (1) a dynamic model for each kind of visyllable, learned with a Gaussian Process Dynamical Model; and (2) a multilinear-model-based unified mapping between the high-dimensional observation space and the low-dimensional latent space. The dynamic visyllable model embeds the high-dimensional motion data and simultaneously constructs the dynamic mapping in the latent space. To generalize the visyllable model across several instances, the mapping coefficient matrices are assembled into a tensor, which is decomposed into independent modes, e.g., identity and uttering style. Novel stylized motions can then be synthesized by linearly combining the components within each mode.
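To make the tensor-decomposition step more concrete, the following minimal NumPy sketch shows how per-instance mapping coefficients could be stacked into a tensor, factored into identity and style modes via a higher-order SVD, and recombined by linear weighting of the mode components. The array shapes, the random placeholder data, and the `synthesize` helper are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the multilinear step described in the abstract (assumed details):
# per-instance mapping coefficient matrices are stacked into a tensor indexed
# by (identity, uttering style, coefficients); the tensor is decomposed into
# per-mode factors, and a novel mapping is synthesized by linearly combining
# components of the identity and style modes.
import numpy as np

rng = np.random.default_rng(0)

n_id, n_style, n_coef = 4, 3, 64  # hypothetical numbers of identities, styles, coefficients
# T[i, s, :] holds the flattened mapping coefficients learned for identity i
# uttering in style s (random placeholder data stands in for learned models).
T = rng.standard_normal((n_id, n_style, n_coef))

# Mode factors from the SVD of each mode unfolding (untruncated HOSVD).
U_id, _, _ = np.linalg.svd(T.reshape(n_id, -1), full_matrices=False)
U_style, _, _ = np.linalg.svd(np.moveaxis(T, 1, 0).reshape(n_style, -1),
                              full_matrices=False)

# Core tensor: the identity and style modes projected onto their factors.
Z = np.einsum('isc,ip,sq->pqc', T, U_id, U_style)

def synthesize(id_weights, style_weights):
    """Blend training identities/styles linearly and map through the core."""
    id_vec = np.asarray(id_weights) @ U_id           # combination in identity mode
    style_vec = np.asarray(style_weights) @ U_style  # combination in style mode
    return np.einsum('p,q,pqc->c', id_vec, style_vec, Z)

# Sanity check: one-hot weights recover the original training instance.
assert np.allclose(synthesize(np.eye(n_id)[1], np.eye(n_style)[2]), T[1, 2])

# A novel stylized mapping: halfway between two identities, blended styles.
w_new = synthesize([0.5, 0.5, 0.0, 0.0], [0.7, 0.3, 0.0])
print(w_new.shape)  # (64,) -- coefficients of the synthesized mapping
```

With one-hot weights the decomposition reproduces a training instance exactly; intermediate weights yield new identity/style combinations, whose coefficients would then parameterize the latent-space mapping of the dynamic visyllable model.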
