Comparison of Large Margin Training to Other Discriminative Methods for Phonetic Recognition by Hidden Markov Models



Abstract

In this paper we compare three frameworks for discriminative training of continuous-density hidden Markov models (CD-HMMs). Specifically, we compare two popular frameworks, based on conditional maximum likelihood (CML) and minimum classification error (MCE), to a new framework based on margin maximization. Unlike CML and MCE, our formulation of large margin training explicitly penalizes incorrect decodings by an amount proportional to the number of mislabeled hidden states. It also leads to a convex optimization over the parameter space of CD-HMMs, thus avoiding the problem of spurious local minima. We used discriminatively trained CD-HMMs from all three frameworks to build phonetic recognizers on the TIMIT speech corpus. The different recognizers employed exactly the same acoustic front end and hidden state space, thus enabling us to isolate the effect of different cost functions, parameterizations, and numerical optimizations. Experimentally, we find that our framework for large margin training yields significantly lower error rates than both CML and MCE training.
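The margin penalty described in the abstract can be sketched as follows (a schematic reconstruction from the abstract alone; the symbols $\mathcal{D}$, $\mathcal{H}$, and $\ell$ are illustrative notation, not necessarily the paper's). For an utterance $X = (x_1, \ldots, x_T)$ with target hidden state sequence $y = (y_1, \ldots, y_T)$ and a discriminant function $\mathcal{D}(X, s)$ scoring candidate state sequences $s$, large margin training requires the target sequence to beat every incorrect decoding by a margin that grows with the number of mislabeled states:

\[
\mathcal{D}(X, y) \;\ge\; \mathcal{H}(y, s) + \mathcal{D}(X, s)
\quad \text{for all } s \ne y,
\qquad \text{where } \mathcal{H}(y, s) = \sum_{t=1}^{T} \mathbf{1}[y_t \ne s_t].
\]

Violations are then penalized by a hinge loss per utterance,

\[
\ell(X, y) \;=\; \Big[\, \max_{s \ne y} \big( \mathcal{H}(y, s) + \mathcal{D}(X, s) \big) \;-\; \mathcal{D}(X, y) \,\Big]_{+},
\]

which is convex in $\mathcal{D}$. The abstract's claim of a convex optimization free of spurious local minima presumably rests on a parameterization of the CD-HMM under which $\mathcal{D}$ is linear in the trainable parameters, and, if the max over sequences is smoothed, on a convex upper bound such as log-sum-exp.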
