IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2009)

A fast, accurate approximation to log likelihood of Gaussian mixture models



Abstract

It has been a common practice in speech recognition and elsewhere to approximate the log likelihood of a Gaussian mixture model (GMM) with the maximum component log likelihood. While often a computational necessity, the max approximation comes at the price of inferior modeling when the Gaussian components overlap significantly. This paper shows how the approximation error can be reduced by changing the component priors. In our experiments, the loss in word error rate due to the max approximation, albeit small, is reduced by 50-100% at no cost in computational efficiency. Furthermore, we expect acoustic models to grow larger over time, increasing component overlap and the word error rate loss, which makes reducing the approximation error more relevant. The techniques considered do not use the original data and can easily be applied as a post-processing step to any GMM.
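A minimal sketch (in Python with NumPy/SciPy, not taken from the paper) of the two quantities the abstract contrasts: the exact GMM log likelihood log Σ_k w_k N(x; μ_k, Σ_k) and the max-component approximation max_k [log w_k + log N(x; μ_k, Σ_k)]. The paper's prior-adjustment technique is not reproduced here; the example only illustrates how the approximation error appears when components overlap. Function names and the toy parameters are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def component_log_likelihoods(x, weights, means, covs):
    """Per-component terms log w_k + log N(x; mu_k, Sigma_k)."""
    return np.array([
        np.log(w) + multivariate_normal.logpdf(x, mean=m, cov=c)
        for w, m, c in zip(weights, means, covs)
    ])

def gmm_log_likelihood(x, weights, means, covs):
    """Exact GMM log likelihood: log sum_k w_k N(x; mu_k, Sigma_k),
    evaluated with log-sum-exp for numerical stability."""
    terms = component_log_likelihoods(x, weights, means, covs)
    m = terms.max()
    return m + np.log(np.exp(terms - m).sum())

def gmm_max_approx(x, weights, means, covs):
    """Max approximation: max_k [log w_k + log N(x; mu_k, Sigma_k)].
    Always a lower bound on the exact log likelihood."""
    return component_log_likelihoods(x, weights, means, covs).max()

# Toy example: two strongly overlapping 1-D components.
# The gap (exact minus max) is largest where the components overlap,
# here log 2 at x = 0, which is the error the paper aims to shrink
# by retuning the component priors w_k.
x = np.array([0.0])
weights = [0.5, 0.5]
means = [np.array([-0.5]), np.array([0.5])]
covs = [np.array([[1.0]]), np.array([[1.0]])]
print(gmm_log_likelihood(x, weights, means, covs)
      - gmm_max_approx(x, weights, means, covs))
```

Because every summand is positive, the max term never exceeds the full sum, so the approximation is a lower bound; the error grows as more components contribute comparable mass at the same point, which is why overlap drives the word error rate loss discussed above.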
