Information Sciences: An International Journal

A survey of techniques for incremental learning of HMM parameters


Abstract

The performance of Hidden Markov Models (HMMs) targeted for complex real-world applications is often degraded because they are designed a priori using limited training data and prior knowledge, and because the classification environment changes during operation. Incremental learning of new data sequences allows HMM parameters to be adapted as new data becomes available, without retraining from scratch on all accumulated training data. This paper presents a survey of techniques found in the literature that are suitable for incremental learning of HMM parameters. These techniques are classified according to the objective function, optimization technique, and target application, involving block-wise and symbol-wise learning of parameters. Convergence properties of these techniques are presented along with an analysis of time and memory complexity. In addition, the challenges faced when these techniques are applied to incremental learning are assessed for scenarios in which the new training data is limited or abundant. While the convergence rate and resource requirements are critical factors when incremental learning is performed through one pass over an abundant stream of data, effective stopping criteria and management of validation sets are important when learning is performed through several iterations over limited data. In both cases, managing the learning rate to integrate pre-existing knowledge and new data is crucial for maintaining a high level of performance. Finally, this paper underscores the need for empirical benchmarking studies among techniques presented in the literature, and proposes several evaluation criteria based on non-parametric statistical testing to facilitate the selection of a technique for a particular application domain.
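To make the block-wise idea concrete, the sketch below illustrates one incremental re-estimation step for a discrete-emission HMM: expected sufficient statistics are computed on the new data block only and blended with the current parameters through a learning rate. This is a minimal illustration of the general approach discussed in the abstract, not code from the surveyed techniques; the function names and the parameter eta are illustrative choices.

import numpy as np

def forward_backward(pi, A, B, obs):
    # Scaled forward-backward pass for a discrete-emission HMM.
    # pi: (N,) initial probabilities, A: (N, N) transitions,
    # B: (N, K) emission probabilities, obs: sequence of symbol indices.
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N)); beta = np.zeros((T, N)); scale = np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]
    scale[0] = alpha[0].sum(); alpha[0] /= scale[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        scale[t] = alpha[t].sum(); alpha[t] /= scale[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / scale[t + 1]
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)           # state posteriors
    xi = np.zeros((T - 1, N, N))                        # transition posteriors
    for t in range(T - 1):
        xi[t] = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
        xi[t] /= xi[t].sum()
    return gamma, xi

def incremental_update(pi, A, B, obs, eta=0.1):
    # One block-wise update: re-estimate from the new block only, then blend
    # with the current parameters through the learning rate eta.
    obs = np.asarray(obs)
    gamma, xi = forward_backward(pi, A, B, obs)
    A_new = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    B_new = np.zeros_like(B)
    for k in range(B.shape[1]):
        B_new[:, k] = gamma[obs == k].sum(axis=0)
    B_new /= gamma.sum(axis=0)[:, None]
    pi_new = gamma[0]
    # Small eta favours pre-existing knowledge; large eta tracks new data.
    return ((1 - eta) * pi + eta * pi_new,
            (1 - eta) * A + eta * A_new,
            (1 - eta) * B + eta * B_new)

    # Usage: pi, A, B = incremental_update(pi, A, B, new_block, eta=0.05)

Symbol-wise (online) variants apply an analogous blend after every observation rather than every block, typically with a learning rate that decays over time; the abstract's remark on managing the learning rate refers to exactly this trade-off between retaining pre-existing knowledge and adapting to new data.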
