Home > Foreign Conference Papers > European Conference on Computer Vision > Speech-Driven Facial Animation Using Manifold Relevance Determination

Speech-Driven Facial Animation Using Manifold Relevance Determination



Abstract

In this paper, a new approach to visual speech synthesis using a joint probabilistic model is introduced, namely a Gaussian process latent variable model with manifold relevance determination, which explicitly models coarticulation. A talking-head dataset (the LIPS dataset) is processed by extracting visual and audio features from its sequences. The model can capture the structure of data of extremely high dimensionality. Distinguishable visual features can be inferred directly from the trained model by sampling from the discovered latent points. A statistical evaluation of the inferred visual features against ground-truth data is obtained and compared with the current state-of-the-art visual speech synthesis approach. The quantitative results demonstrate that the proposed approach outperforms the state-of-the-art technique.
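The core idea behind manifold relevance determination is that several observation views (here, audio and visual features) share a single latent space, with per-view relevance weights separating shared latent dimensions from view-private ones. The following is a minimal linear toy sketch of that factorization in numpy, not the paper's GP-based model; the data, weight vectors, and the 0.1 relevance threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch of the MRD idea: two views generated from shared latent points,
# with ARD-style weights deciding which latent dimensions each view uses.
# This is a linear illustration, not the GP-LVM model from the paper.

N, Q = 200, 4                        # number of samples, latent dimensionality
X = rng.normal(size=(N, Q))          # latent points

# Dims 0-1 are shared, dim 2 is audio-private, dim 3 is visual-private.
w_audio  = np.array([1.0, 0.8, 1.2, 0.0])
w_visual = np.array([0.9, 1.1, 0.0, 1.3])

Y_audio  = X * w_audio  + 0.05 * rng.normal(size=(N, Q))
Y_visual = X * w_visual + 0.05 * rng.normal(size=(N, Q))

def relevance(Y, X):
    """Estimate each latent dimension's relevance to a view as the
    magnitude of its least-squares regression weight."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return np.abs(np.diag(beta))

r_a = relevance(Y_audio, X)
r_v = relevance(Y_visual, X)

# Dimensions relevant to BOTH views are the shared manifold.
shared = [q for q in range(Q) if r_a[q] > 0.1 and r_v[q] > 0.1]
print("shared dims:", shared)
```

In the paper's setting, the analogue of sampling new visual features from a latent point found via the audio view would be moving along the shared dimensions while marginalizing the visual-private ones.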
