Adaptive systems for hidden Markov model-based pattern recognition systems.

Abstract

This thesis focuses on the design of adaptive systems (AS) for dealing with complex pattern recognition problems. Pattern recognition systems usually rely on static knowledge to define a configuration that is used during their entire lifespan. However, some systems need to adapt to knowledge that may not have been available in the design phase. For this reason, AS are designed to tailor a baseline pattern recognition system as required, in an automated fashion, in both the learning and generalization phases. These AS are defined here using hidden Markov model (HMM)-based classifiers as a case study.

We first evaluate incremental learning algorithms for the estimation of HMM parameters. The main goal is to find incremental learning algorithms that perform as well as traditional batch learning techniques, while offering the advantages of incremental learning for designing complex pattern recognition systems. Experiments on handwritten characters have shown that a proposed variant of the Ensemble Training algorithm, which employs ensembles of HMMs, can lead to very promising results. Furthermore, the use of a validation dataset demonstrates that it is possible to achieve better performance than that of batch learning.

We then propose a new approach for the dynamic selection of ensembles of classifiers. Building on the concept of "multistage organizations", whose main objective is to define a multi-layer fusion function adapted to each recognition problem, we propose dynamic multistage organization (DMO), which defines the best multistage structure for each test sample. By extending Dos Santos et al.'s approach, we propose two implementations of DMO, namely DSAm and DSAc. DSAm considers a set of dynamic selection functions to generalize a DMO structure, whereas DSAc uses contextual information, represented by the output profiles computed from the validation dataset. The experimental evaluation, considering both small and large datasets, demonstrates that DSAc outperforms DSAm on most problems, showing that the use of contextual information can result in better performance than other methods. The performance of DSAc can also be enhanced through incremental learning. However, the most important observation, supported by additional experiments, is that dynamic selection is generally preferable to static approaches when the recognition problem presents a high level of uncertainty.

Finally, we propose the LoGID (Local and Global Incremental Learning for Dynamic Selection) framework, whose main goal is to adapt HMM-based pattern recognition systems in both the learning and generalization phases. Given that the baseline system is composed of a pool of base classifiers, adaptation during generalization is conducted by dynamically selecting the best members of this pool to recognize each test sample. Dynamic selection is performed by the proposed K-nearest output profiles algorithm, while adaptation during learning consists of gradually updating the knowledge embedded in the base classifiers by processing previously unobserved data. This phase employs two types of incremental learning: local and global. Local incremental learning updates the pool of base classifiers by adding new members, which are created with the Learn++ algorithm. In contrast, global incremental learning updates the set of output profiles used during generalization. The proposed framework has been evaluated on a diversified set of databases.
The results indicate that LoGID is promising. In most databases, the recognition rates achieved by the proposed method are higher than those achieved by other state-of-the-art approaches, such as batch learning. Furthermore, the simulated incremental learning setting demonstrates that LoGID can effectively improve the performance of systems created with small training sets as more data are observed over time.
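The abstract describes the Ensemble Training variant only at a high level. Purely as an illustration of the underlying idea of incremental learning with ensembles of HMMs, the sketch below trains a fresh per-class HMM on each incoming data batch and classifies a sequence by the average log-likelihood over each class's pool; the class and method names, the use of hmmlearn, and the averaging rule are assumptions for illustration, not the algorithm evaluated in the thesis.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # any HMM library with fit/score would do


class HMMEnsemble:
    """Pool of per-class HMMs grown batch by batch (illustrative sketch).

    Each labelled batch trains a fresh HMM per class and appends it to the
    pool, so earlier data never needs to be revisited; a test sequence is
    assigned to the class whose ensemble gives the highest mean log-likelihood.
    """

    def __init__(self, n_states=3):
        self.n_states = n_states
        self.pool = {}  # class label -> list of trained HMMs

    def learn_batch(self, sequences_by_class):
        """sequences_by_class: {label: list of (T_i, d) observation arrays}."""
        for label, seqs in sequences_by_class.items():
            X = np.vstack(seqs)               # hmmlearn expects stacked sequences
            lengths = [len(s) for s in seqs]  # plus their individual lengths
            model = GaussianHMM(n_components=self.n_states, n_iter=20)
            model.fit(X, lengths)
            self.pool.setdefault(label, []).append(model)

    def classify(self, seq):
        """Average log-likelihood over each class's ensemble members."""
        scores = {label: np.mean([m.score(seq) for m in models])
                  for label, models in self.pool.items()}
        return max(scores, key=scores.get)
```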
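Likewise, the K-nearest output profiles algorithm is only named above. A minimal sketch of one plausible reading: represent each sample by its output profile (the concatenated scores of all base classifiers), find the K validation samples whose profiles are closest to the test profile, and keep the base classifiers that were most often correct on that neighbourhood. The function names, the Euclidean distance, and the voting rule are hypothetical choices, not taken from the thesis.

```python
import numpy as np


def output_profile(classifier_scores):
    """Concatenate the per-class scores of every base classifier into one vector.

    classifier_scores: list of 1-D arrays, one per base classifier
    (e.g. per-class HMM log-likelihoods).
    """
    return np.concatenate(classifier_scores)


def knop_select(test_profile, val_profiles, val_correct, k=7):
    """Dynamically select base classifiers for one test sample.

    val_profiles: (n_val, d) output profiles computed on the validation set.
    val_correct:  (n_val, n_classifiers) booleans, True where classifier j
                  labelled validation sample i correctly.
    Returns indices of the selected classifiers; falls back to the whole
    pool if no classifier was correct on the neighbourhood.
    """
    dists = np.linalg.norm(val_profiles - test_profile, axis=1)
    neighbours = np.argsort(dists)[:k]           # K nearest output profiles
    votes = val_correct[neighbours].sum(axis=0)  # per-classifier hit count
    if votes.max() == 0:
        return np.arange(val_correct.shape[1])
    return np.flatnonzero(votes == votes.max())
```

The selected classifiers' outputs would then be fused, for example by majority vote, to label the test sample; in this reading, global incremental learning would amount to appending new output profiles to val_profiles and val_correct as previously unobserved data are processed.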

Bibliographic record

  • Author: Cavalin, Paulo Rodrigo
  • Author affiliation: Ecole de Technologie Superieure (Canada)
  • Degree-granting institution: Ecole de Technologie Superieure (Canada)
  • Subjects: Artificial Intelligence; Computer Science
  • Degree: D.Eng.
  • Year: 2011
  • Pages: 156 p.
  • Format: PDF
  • Language: English
