
Sample complexity and generalization in feedforward neural networks.

Abstract

Artificial neural network applications are usually tuned by experimentation. Several combinations of network parameters and training set sizes are evaluated until reasonably good performance is achieved. Our ability to predict the behaviour of a neural network is so weak that there is frequently no choice but to adjust its parameters by trial and error; that is an approach we would like to consign to the past. In this dissertation we characterize the learning process of a neural network as the probability of producing the correct (or nearly correct) output on samples never seen during the training phase. Statistical Learning Theory and Probably Approximately Correct (PAC) learning provide the basic model for reasoning about measurable properties of a learning machine. Sample complexity (that is, training set size), machine size, training error, generalization error, and confidence are woven into a single equation that describes the learning properties of a learning machine. We derive theoretical bounds on the sample complexity and generalization of linear-basis and radial-basis neural networks.
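
The abstract does not reproduce the equation itself; a standard PAC/VC-style bound that weaves these quantities together (an illustrative textbook form, not a quotation from the dissertation) reads, in LaTeX:

    R(h) \le \hat{R}(h) + \sqrt{\frac{d\left(\ln\frac{2m}{d} + 1\right) + \ln\frac{4}{\delta}}{m}}

Here m is the training set size (the sample complexity), d the VC dimension of the network (its "machine size"), \hat{R}(h) the training error, R(h) the generalization error, and the bound holds with probability at least 1 - \delta, where 1 - \delta is the confidence.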
