Venue: IEEE-SP Workshop on Neural Networks for Signal Processing

Generalization and maximum likelihood from small data sets



Abstract

A technique is described which can be used to prevent overtraining and encourage generalization in training under a maximum likelihood criterion. Applications to Boltzmann machines and hidden Markov models (HMMs) are discussed. While the confidence constraint may slow the training algorithm, in general it should involve very little additional calculation. The results presented for HMMs are for training under a maximum likelihood criterion based on the marginal distribution. Similar modifications can be made to the segmental K-means and N-best algorithms.
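The abstract does not spell out how the confidence constraint is imposed during maximum likelihood estimation. As a rough illustration only, the sketch below shows one generic way such a constraint can work on small data sets: a raw ML estimate of a discrete (e.g. HMM emission) distribution is floored and renormalized so that rarely or never observed symbols are not driven to probability zero. The function names and the flooring scheme are assumptions for this sketch, not the paper's method.

```python
import numpy as np

def ml_estimate(counts):
    """Plain maximum likelihood estimate of a discrete distribution."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum()

def floored_ml_estimate(counts, floor=0.01):
    """ML estimate with a probability floor (a simple stand-in for a
    confidence constraint; the paper's actual constraint may differ).
    On small samples, raw ML assigns probability zero to unseen
    symbols, which overfits the training set; flooring keeps every
    probability at least `floor`, then renormalizes."""
    p = ml_estimate(counts)
    p = np.maximum(p, floor)
    return p / p.sum()

# Small-sample example: symbol 2 was never observed.
counts = [5, 3, 0]
raw = ml_estimate(counts)          # last entry is exactly 0
safe = floored_ml_estimate(counts) # last entry is small but nonzero
```

As the abstract notes, a constraint of this kind adds very little computation per update, since it is applied after the usual ML re-estimation step.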

