International Conference on Text, Speech and Dialogue

Anti-Models: An Alternative Way to Discriminative Training

Abstract

Traditional discriminative training methods modify Hidden Markov Model (HMM) parameters obtained with a Maximum Likelihood (ML) criterion based estimator. In this paper, anti-models are introduced instead. The anti-models are used in tandem with the ML models to incorporate discriminative information from the training data set and to modify the HMM output likelihood in a discriminative way. Traditional discriminative training methods are prone to over-fitting and require extra stabilization; moreover, convergence is not guaranteed, so in practice a "proper" number of iterations is chosen heuristically. In the proposed anti-model concept, both parts, the positive model and the anti-model, are trained with the ML criterion, so convergence and stability are ensured.
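The abstract does not state the exact rule for combining the two models, so the following is only a minimal Python sketch of the general idea: the positive (ML) model's log-likelihood is corrected by subtracting a weighted anti-model log-likelihood. The single-Gaussian emissions, the dictionary-based model structure, and the weight `beta` are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

# Sketch of the anti-model idea (assumption: the combined score is a weighted
# difference of positive-model and anti-model log-likelihoods; `beta` and the
# Gaussian emissions are hypothetical illustration choices).

def gaussian_loglik(x, mean, var):
    """Log-likelihood of a diagonal-covariance Gaussian emission."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def discriminative_score(x, pos_model, anti_model, beta=0.5):
    """Positive-model likelihood corrected by the anti-model.

    Both models are trained independently with the ML criterion; the
    anti-model, estimated on competing (confusable) data, penalizes
    observations it explains well.
    """
    pos = gaussian_loglik(x, pos_model["mean"], pos_model["var"])
    anti = gaussian_loglik(x, anti_model["mean"], anti_model["var"])
    return pos - beta * anti

# Toy usage: two single-Gaussian "models" standing in for HMM state emissions.
pos_model = {"mean": np.array([0.0, 0.0]), "var": np.array([1.0, 1.0])}
anti_model = {"mean": np.array([0.5, 0.5]), "var": np.array([2.0, 2.0])}
x = np.array([0.1, -0.2])
print(discriminative_score(x, pos_model, anti_model))
```

Because each component is an ordinary ML estimate, the usual EM convergence guarantees apply to both; only the scoring step combines them discriminatively.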
