
HMM-based transmodal mapping from audio speech to talking faces



Abstract

This paper describes a transmodal mapping from audio speech to talking faces based on hidden Markov models (HMMs). If facial movements can be synthesized well enough to support natural communication, human-machine interaction stands to benefit greatly. The paper presents an HMM-based, speech-driven lip movement synthesis method, improves it through audio-visual joint estimation, and extends it to full talking-face generation. Evaluation experiments show that the proposed method generates natural and accurate talking faces from audio speech input.
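The core idea of such a mapping can be sketched as follows: decode the most likely hidden-state sequence from the audio feature frames, then emit each state's associated visual (lip) parameters. This is a minimal illustrative sketch, not the paper's actual models; the model sizes, feature dimensions, and the `audio_to_lips` helper are all hypothetical assumptions for illustration.

```python
import numpy as np

def viterbi(log_pi, log_A, log_B):
    """Most likely HMM state path given per-frame log-likelihoods log_B (T x N)."""
    T, N = log_B.shape
    delta = log_pi + log_B[0]
    psi = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A          # (N, N): score of prev -> next
        psi[t] = np.argmax(scores, axis=0)       # best predecessor per next state
        delta = scores[psi[t], np.arange(N)] + log_B[t]
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(delta))
    for t in range(T - 2, -1, -1):               # backtrace
        path[t] = psi[t + 1][path[t + 1]]
    return path

def audio_to_lips(audio_frames, means, covs, log_pi, log_A, state_visual_means):
    """Hypothetical sketch: map audio feature frames to visual lip parameters.

    Each HMM state has a diagonal-Gaussian audio emission (means, covs) and an
    associated mean visual-parameter vector (state_visual_means).
    """
    # Per-frame, per-state diagonal-Gaussian log-likelihoods: (T, N)
    diff = audio_frames[:, None, :] - means[None, :, :]        # (T, N, D)
    log_B = -0.5 * np.sum(diff**2 / covs + np.log(2 * np.pi * covs), axis=2)
    states = viterbi(log_pi, log_A, log_B)
    return state_visual_means[states]                          # (T, V)
```

Joint audio-visual estimation, as described in the paper, would instead model audio and visual features together in each state so that the visual trajectory is estimated from the joint statistics rather than read off per-state means; the decode-then-emit sketch above only illustrates the basic transmodal pipeline.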


