
Generalized multi-stream hidden Markov models.



Abstract

For complex classification systems, data is usually gathered from multiple sources of information that have varying degrees of reliability. In fact, assuming that the different sources have the same relevance in describing all the data might lead to erroneous behavior. The classification error accumulates and can be more severe for temporal data, where each sample is represented by a sequence of observations. Thus, there is compelling evidence that learning algorithms should include a relevance weight for each source of information (stream) as a parameter that needs to be learned.

In this dissertation, we assume that the multi-stream temporal data is generated by independent and synchronous streams. Using this assumption, we develop, implement, and test multi-stream continuous and discrete hidden Markov model (HMM) algorithms.

For the discrete case, we propose two new approaches to generalize the baseline discrete HMM. The first combines unsupervised learning, feature discrimination, standard discrete HMMs, and weighted distances to learn the codebook with feature-dependent weights for each symbol. The second approach consists of modifying the HMM structure to include stream relevance weights, generalizing the standard discrete Baum-Welch learning algorithm, and deriving the necessary conditions to optimize all model parameters simultaneously. We also generalize the minimum classification error (MCE) discriminative training algorithm to include stream relevance weights.

For the continuous HMM, we introduce a new approach that integrates the stream relevance weights into the objective function. Our approach is based on the linearization of the probability density function. Two variations are proposed: the mixture-level and state-level variations. As in the discrete case, we generalize the continuous Baum-Welch learning algorithm to accommodate these changes and derive the necessary conditions for updating the model parameters. We also generalize the MCE learning algorithm to derive the necessary conditions for the model parameter updates.

The proposed discrete and continuous HMMs are tested on synthetic data sets. They are also validated on various applications, including Australian Sign Language, audio classification, face classification, and, more extensively, the problem of landmine detection using ground penetrating radar data. For all applications, we show that considerable improvement can be achieved compared to the baseline HMM and the existing multi-stream HMM algorithms.
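The abstract describes linearizing the emission probability density so that each stream's contribution enters as a weighted sum scaled by a learned relevance weight, rather than the usual product over streams. The dissertation's exact formulation is not reproduced here; the following is a minimal sketch of the mixture-level idea under that assumption. All function and parameter names are hypothetical illustrations, not the author's notation.

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """Diagonal-covariance Gaussian density for one stream's observation."""
    return float(np.prod(np.exp(-0.5 * (np.asarray(x) - mean) ** 2 / var)
                         / np.sqrt(2 * np.pi * var)))

def stream_weighted_emission(obs_streams, means, vars_, mix_weights, stream_weights):
    """
    Hypothetical mixture-level emission density for one HMM state.
    Per-stream Gaussian densities are combined as a weighted sum
    (a linearization of the product over streams), so each stream's
    contribution is scaled by its relevance weight.

    obs_streams:    list of S per-stream observations
    means, vars_:   [K][S] per-mixture, per-stream Gaussian parameters
    mix_weights:    K mixture coefficients (sum to 1)
    stream_weights: [K][S] relevance weights (each row sums to 1)
    """
    total = 0.0
    for k, c_k in enumerate(mix_weights):
        # Linearized combination: sum over streams instead of product.
        stream_sum = sum(
            stream_weights[k][s] * gaussian_pdf(obs, means[k][s], vars_[k][s])
            for s, obs in enumerate(obs_streams)
        )
        total += c_k * stream_sum
    return total
```

Because the combination is linear in the stream weights, setting a stream's weight to zero removes its influence entirely, which is the behavior one would want for an unreliable source; the weights themselves would be estimated jointly with the other HMM parameters (e.g., inside a generalized Baum-Welch update).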

Record details

  • Author

    Missaoui, Oualid.

  • Author affiliation

    University of Louisville.

  • Degree grantor University of Louisville.
  • Subject Artificial Intelligence; Computer Science.
  • Degree Ph.D.
  • Year 2010
  • Pagination 178 p.
  • Total pages 178
  • Format PDF
  • Language English
