Non-uniform Kernel Allocation Based Parsimonious HMM


Abstract

In a conventional Gaussian mixture based Hidden Markov Model (HMM), all states are usually modeled with a uniform, fixed number of Gaussian kernels. In this paper, we propose to allocate kernels non-uniformly to construct a more parsimonious HMM. Different numbers of Gaussian kernels are allocated to states in a non-uniform and parsimonious way so as to optimize the Minimum Description Length (MDL) criterion, which combines data likelihood with a model complexity penalty. Using the likelihoods obtained in Baum-Welch training, we develop an efficient backward kernel pruning algorithm, which is shown to be optimal under two mild assumptions. Two databases, Resource Management and Microsoft Mandarin Speech Toolbox, are used to test the proposed parsimonious modeling algorithm. The new parsimonious models reduce the baseline word recognition error rate by 11.1% and 5.7%, respectively, in relative terms. Alternatively, at the same performance level, 35-50% model compression can be obtained.
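The per-state allocation described above can be illustrated with a small sketch. This is not the paper's implementation; it is a minimal, hypothetical illustration of selecting the kernel count for each state that minimizes an MDL score of the standard form (negative log-likelihood plus half the number of free parameters times the log of the data size), assuming diagonal-covariance Gaussians and per-state likelihoods already accumulated during Baum-Welch training. The function names, the `penalty` weight, and the 39-dimensional feature assumption are all illustrative.

```python
import math

def num_params_gaussian(k, dim=39):
    # k diagonal-covariance Gaussians over dim-dimensional features:
    # k mean vectors + k variance vectors + (k - 1) free mixture weights.
    return k * 2 * dim + (k - 1)

def mdl_score(log_likelihood, num_params, num_frames, penalty=1.0):
    # MDL = -log-likelihood + (penalty/2) * (#free parameters) * log(#frames).
    return -log_likelihood + 0.5 * penalty * num_params * math.log(num_frames)

def allocate_kernels(states, num_frames, penalty=1.0):
    """Pick, per state, the kernel count minimizing the MDL score.

    `states` maps a state name to {kernel_count: log_likelihood}, e.g.
    likelihoods accumulated for candidate mixture sizes during training.
    """
    allocation = {}
    for name, ll_by_k in states.items():
        allocation[name] = min(
            ll_by_k,
            key=lambda k: mdl_score(
                ll_by_k[k], num_params_gaussian(k), num_frames, penalty
            ),
        )
    return allocation
```

A state whose likelihood barely improves with more kernels is thus assigned a small mixture, while a state with large likelihood gains keeps more kernels, yielding the non-uniform allocation.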
