Computer Speech and Language

Efficient training of high-order hidden Markov models using first-order representations



Abstract

We detail an algorithm (ORED) that transforms any higher-order hidden Markov model (HMM) to an equivalent first-order HMM. This makes it possible to process higher-order HMMs with standard techniques applicable to first-order models. Based on this equivalence, a fast incremental algorithm (FIT) is developed for training higher-order HMMs from lower-order models, thereby avoiding the training of redundant parameters. We also show that the FIT algorithm results in much faster training and better generalization compared to conventional high-order HMM approaches. This makes training of high-order HMMs practical for many applications.
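The equivalence underlying ORED can be illustrated with a small sketch (our illustration of the standard state-expansion idea, not the paper's actual implementation): a second-order HMM over N states, whose transitions P(s_t = k | s_{t-2} = i, s_{t-1} = j) form an N×N×N tensor, is equivalent to a first-order HMM over N² composite states (s_{t-1}, s_t), with each second-order probability copied to the corresponding composite-state transition.

```python
import numpy as np

def second_to_first_order(A2):
    """Expand a second-order transition tensor A2[i, j, k] = P(k | i, j)
    of shape (N, N, N) into an equivalent first-order transition matrix
    over N*N composite states (i, j)."""
    N = A2.shape[0]
    A1 = np.zeros((N * N, N * N))
    for i in range(N):
        for j in range(N):
            for k in range(N):
                # composite state (i, j) -> (j, k) inherits P(k | i, j);
                # all other composite transitions are structurally zero
                A1[i * N + j, j * N + k] = A2[i, j, k]
    return A1

# toy example: N = 2 states with uniform second-order transitions
A2 = np.full((2, 2, 2), 0.5)
A1 = second_to_first_order(A2)
# each composite-state row remains a valid probability distribution
assert np.allclose(A1.sum(axis=1), 1.0)
```

Once in this first-order form, the expanded model can be trained and decoded with the standard first-order machinery (forward-backward, Viterbi), which is what makes the transformation useful; the FIT algorithm then avoids estimating the redundant entries this expansion would otherwise introduce.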
