
Variational Bayesian GMM for Speech Recognition



Abstract

In this paper, we explore the potential of Variational Bayesian (VB) learning for speech recognition problems. VB methods handle model selection more rigorously and generalize MAP learning. VB training of Gaussian Mixture Models is less affected than EM-ML training by over-fitting and singular solutions. We compare two types of Variational Bayesian Gaussian Mixture Models (VBGMM) with classical EM-ML GMMs on a phoneme recognition task over the TIMIT database. VB learning outperforms EM-ML learning and is less sensitive to the initial model guess.
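The contrast the abstract draws between EM-ML and VB training of GMMs can be sketched with off-the-shelf tools. The snippet below is a minimal illustration using scikit-learn's `GaussianMixture` (EM-ML) and `BayesianGaussianMixture` (variational inference), not the paper's implementation; the synthetic 2-D data, component counts, and prior value are all assumptions chosen for demonstration. It shows the pruning effect the abstract alludes to: the Dirichlet prior on mixture weights drives superfluous components toward zero weight rather than letting them collapse onto single points.

```python
# Sketch: EM-ML vs. Variational Bayesian GMM fitting (illustrative only;
# scikit-learn stand-ins, not the paper's VBGMM for phoneme recognition).
import numpy as np
from sklearn.mixture import GaussianMixture, BayesianGaussianMixture

rng = np.random.default_rng(0)
# Two well-separated clusters standing in for acoustic feature frames.
X = np.vstack([
    rng.normal(-2.0, 0.5, size=(200, 2)),
    rng.normal(+2.0, 0.5, size=(200, 2)),
])

# EM-ML: all 8 requested components receive mass, risking over-fitting.
em = GaussianMixture(n_components=8, random_state=0).fit(X)

# VB: a small Dirichlet concentration prior on the mixture weights
# prunes unneeded components instead of producing singular solutions.
vb = BayesianGaussianMixture(
    n_components=8,
    weight_concentration_prior=1e-2,  # assumed value; favours few components
    random_state=0,
).fit(X)

# Count components the VB model actually keeps active.
active = int(np.sum(vb.weights_ > 1e-2))
print("EM-ML weights:", np.round(em.weights_, 3))
print("VB active components:", active)
```

With data like this, the VB fit typically concentrates its weight on a couple of components while the EM-ML fit spreads mass over all eight, mirroring the abstract's claim that VB is more robust to over-fitting and to the initial model guess.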
