
To stop learning using the evidence

Abstract

Theoretical and practical aspects of Multi-Layer Perceptron (MLP) learning from the Bayesian perspective were first addressed by David MacKay in 1991. In this framework, the learning algorithm is an iterative process that alternates between optimizing the weights and estimating hyperparameters, such as the weight-decay parameters. Moreover, trained MLPs that generalize better have higher "evidence", a probability that quantifies how well an MLP is adapted to a problem. This paper proposes and motivates a new methodology that computes the evidence during learning for different MLP configurations. These evidence estimates, together with confidence intervals on the test set, are used to rank MLP configurations and then to stop learning. The learning strategy is illustrated on classification problems.
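For context, the "evidence" here is MacKay's marginal likelihood of the data given a network hypothesis. As a hedged sketch (these are MacKay's standard 1992 regression-case formulas, not reproduced from this paper; for the classification problems treated here the noise-level terms drop out), the Gaussian (Laplace) approximation to the log-evidence for a network with $W$ weights, $N$ data points, weight-decay hyperparameter $\alpha$ and noise hyperparameter $\beta$ is

$$\ln p(\mathcal{D}\mid\alpha,\beta,\mathcal{H}) \approx -\alpha E_W(\mathbf{w}_{\mathrm{MP}}) - \beta E_D(\mathbf{w}_{\mathrm{MP}}) - \tfrac{1}{2}\ln\lvert\mathbf{A}\rvert + \tfrac{W}{2}\ln\alpha + \tfrac{N}{2}\ln\beta - \tfrac{N}{2}\ln 2\pi,$$

where $\mathbf{A} = \beta\,\nabla\nabla E_D + \alpha\mathbf{I}$ is the Hessian of the regularized error at the most probable weights $\mathbf{w}_{\mathrm{MP}}$. The alternation between weight optimization and hyperparameter estimation mentioned above typically uses the effective number of well-determined parameters,

$$\gamma = W - \alpha\,\operatorname{Tr}(\mathbf{A}^{-1}), \qquad \alpha^{\mathrm{new}} = \frac{\gamma}{2E_W}, \qquad \beta^{\mathrm{new}} = \frac{N-\gamma}{2E_D}.$$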
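Below is a minimal, runnable numpy sketch of this alternating scheme, under loudly stated assumptions: a toy 1-h-1 regression network with sum-of-squares error (where the formulas above take their simplest form; the paper itself treats classification), a Gauss-Newton approximation A ≈ βJᵀJ + αI to the Hessian, finite-difference Jacobians, and guarded hyperparameter updates. The data, network sizes, and every name are illustrative, not the paper's code.

import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (illustrative only).
X = rng.uniform(-1.0, 1.0, size=50)
t = np.sin(3.0 * X) + 0.1 * rng.standard_normal(50)
N = len(X)

def forward(w, h, x):
    # 1-h-1 MLP; weights packed as [W1 (h), b1 (h), W2 (h), b2 (1)].
    W1, b1, W2, b2 = w[:h], w[h:2*h], w[2*h:3*h], w[3*h]
    return np.tanh(np.outer(x, W1) + b1) @ W2 + b2

def jacobian(w, h, eps=1e-5):
    # Finite-difference Jacobian of the network outputs w.r.t. the weights.
    J = np.empty((N, len(w)))
    for i in range(len(w)):
        wp, wm = w.copy(), w.copy()
        wp[i] += eps
        wm[i] -= eps
        J[:, i] = (forward(wp, h, X) - forward(wm, h, X)) / (2.0 * eps)
    return J

def train_and_evidence(h, outer=10, inner=20, damp=1.0):
    w = 0.1 * rng.standard_normal(3 * h + 1)
    alpha, beta = 0.1, 10.0
    I = np.eye(len(w))
    for _ in range(outer):
        # Inner loop: damped Gauss-Newton steps on beta*E_D + alpha*E_W.
        for _ in range(inner):
            J = jacobian(w, h)
            r = forward(w, h, X) - t
            A = beta * (J.T @ J) + alpha * I
            g = beta * (J.T @ r) + alpha * w
            w = w - np.linalg.solve(A + damp * I, g)
        # Re-estimate hyperparameters (MacKay's update, clipped for demo stability).
        J = jacobian(w, h)
        r = forward(w, h, X) - t
        A = beta * (J.T @ J) + alpha * I
        gamma = len(w) - alpha * np.trace(np.linalg.inv(A))
        E_D, E_W = 0.5 * (r @ r), 0.5 * (w @ w)
        alpha = np.clip(gamma / (2.0 * E_W + 1e-12), 1e-4, 1e4)
        beta = np.clip((N - gamma) / (2.0 * E_D + 1e-12), 1e-4, 1e4)
    # Gaussian-approximation log-evidence for this configuration.
    J = jacobian(w, h)
    r = forward(w, h, X) - t
    A = beta * (J.T @ J) + alpha * I
    E_D, E_W = 0.5 * (r @ r), 0.5 * (w @ w)
    _, logdetA = np.linalg.slogdet(A)
    return (-alpha * E_W - beta * E_D - 0.5 * logdetA
            + 0.5 * len(w) * np.log(alpha) + 0.5 * N * np.log(beta)
            - 0.5 * N * np.log(2.0 * np.pi))

# Rank candidate configurations (hidden-layer sizes) by log-evidence.
for h in (2, 4, 8):
    print(f"h={h}: log evidence ~ {train_and_evidence(h):.1f}")

Per the abstract, the resulting log-evidence (ideally supplemented with test-set confidence intervals) is the quantity used both to rank configurations and to decide when to stop learning.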
