IEEE Transactions on Speech and Audio Processing > Estimation of articulatory movements from speech acoustics using an HMM-based speech production model

Estimation of articulatory movements from speech acoustics using an HMM-based speech production model



Abstract

We present a method that determines articulatory movements from speech acoustics using a Hidden Markov Model (HMM)-based speech production model. The model statistically generates the speech spectrum and articulatory parameters from a given phonemic string. It consists of HMMs of articulatory parameters for each phoneme and an articulatory-to-acoustic mapping for each HMM state. For a given speech spectrum, maximum a posteriori (MAP) estimation of the articulatory parameters of the statistical model is presented. Performance on sentences was evaluated by comparing the estimated articulatory parameters with the observed parameters. The average RMS error of the estimated articulatory parameters was 1.50 mm when both the speech acoustics and the phonemic information of the utterance were given, and 1.73 mm from the speech acoustics alone.
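To illustrate the kind of per-state MAP estimation the abstract describes, here is a minimal sketch assuming (hypothetically) a Gaussian prior over the articulatory parameters for one HMM state and a linear-Gaussian articulatory-to-acoustic mapping. The function name, the linear form of the mapping, and all matrices below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def map_articulatory_estimate(y, A, b, mu, Sigma_x, Sigma_y):
    """MAP estimate of articulatory parameters x for one HMM state.

    Assumed model (illustrative only):
        prior:       x ~ N(mu, Sigma_x)        (state-dependent)
        observation: y = A @ x + b + noise, noise ~ N(0, Sigma_y)
    The MAP estimate combines the prior precision with the
    observation precision in the usual Gaussian-posterior form.
    """
    P = np.linalg.inv(Sigma_x)                # prior precision
    R = np.linalg.inv(Sigma_y)                # observation precision
    posterior_cov = np.linalg.inv(P + A.T @ R @ A)
    return posterior_cov @ (P @ mu + A.T @ R @ (y - b))

# Toy example: 2-D articulatory space with an identity mapping.
A = np.eye(2)
b = np.zeros(2)
mu = np.zeros(2)                              # prior mean for this state
y = np.array([2.0, -4.0])                     # observed spectral feature
x_map = map_articulatory_estimate(y, A, b, mu, np.eye(2), np.eye(2))
# With equal prior and noise covariances, the MAP estimate lies halfway
# between the prior mean and the observation: [1.0, -2.0].
```

In the full model, the state sequence itself must also be inferred (from the phonemic string or from the acoustics alone), and the estimate for an utterance combines such per-state posteriors along that sequence.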
