Proceedings of the IEEE

A statistical approach to learning and generalization in layered neural networks


Abstract

A general statistical description of the problem of learning from examples is presented. Learning in layered networks is posed as a search in the network parameter space for a network that minimizes an additive error function over statistically independent examples. By imposing the equivalence of the minimum-error and maximum-likelihood criteria for training the network, the Gibbs distribution on the ensemble of networks with a fixed architecture is derived. The probability of correctly predicting a novel example can be expressed using this ensemble, serving as a measure of the network's generalization ability. The entropy of the prediction distribution is shown to be a consistent measure of the network's performance. The proposed formalism is applied to the problems of selecting an optimal architecture and predicting learning curves.
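The core construction in the abstract can be sketched numerically: an additive error over independent examples induces a Gibbs distribution P(w) ∝ exp(−βE(w)) on the ensemble of networks, which then yields an ensemble-averaged prediction and an entropy. The following is a minimal toy sketch of that idea, not the paper's actual method: the one-parameter linear "network", the grid discretization, and the value of β are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy teacher: training examples from y = w_true * x + noise (an assumption
# for illustration; the paper treats general layered networks).
w_true = 0.7
x_train = rng.uniform(-1, 1, 20)
y_train = w_true * x_train + 0.1 * rng.standard_normal(20)

# Candidate "networks": a grid over the single weight w.
w_grid = np.linspace(-2, 2, 401)

# Additive error over statistically independent examples:
# E(w) = sum_i (y_i - w * x_i)^2
errors = ((y_train[None, :] - w_grid[:, None] * x_train[None, :]) ** 2).sum(axis=1)

# Gibbs distribution on the ensemble of networks: P(w) ∝ exp(-beta * E(w)).
beta = 5.0
log_p = -beta * errors
p = np.exp(log_p - log_p.max())  # subtract max for numerical stability
p /= p.sum()

# Ensemble-averaged prediction for a novel input.
x_new = 0.5
y_pred = np.sum(p * (w_grid * x_new))

# Entropy of the ensemble distribution: a low entropy indicates the training
# data have concentrated the ensemble on few networks.
entropy = -np.sum(p * np.log(p + 1e-300))

print(f"ensemble prediction at x={x_new}: {y_pred:.3f}")
print(f"ensemble entropy (nats): {entropy:.3f}")
```

As β grows the Gibbs distribution concentrates on minimum-error networks, recovering error minimization as the zero-temperature limit of maximum likelihood, which is the equivalence the abstract imposes.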
