IEEE Transactions on Neural Networks

Learning algorithms for feedforward networks based on finite samples



Abstract

We present two classes of convergent algorithms for learning continuous functions and regressions that are approximated by feedforward networks. The first class of algorithms, applicable to networks with unknown weights located only in the output layer, is obtained by utilizing the potential function methods of Aizerman et al. (1970). The second class, applicable to general feedforward networks, is obtained by utilizing the classical Robbins-Monro style stochastic approximation methods (1951). Conditions relating the sample sizes to the error bounds are derived for both classes of algorithms using martingale-type inequalities. For concreteness, the discussion is presented in terms of neural networks, but the results are applicable to general feedforward networks, in particular to wavelet networks. The algorithms can be directly adapted to concept learning problems.
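To make the setting concrete, here is a minimal sketch (not the paper's algorithms or analysis) of the first class of problems: a feedforward network whose unknown weights sit only in the output layer, trained from finite noisy samples with a Robbins-Monro style decreasing gain `a_n` satisfying the classical conditions `sum a_n = inf`, `sum a_n^2 < inf`. The hidden-layer size, target function, noise level, and step-size schedule below are all illustrative assumptions; the second class of algorithms would additionally update the hidden-layer weights by the same stochastic-gradient recursion.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Regression function to be learned (illustrative choice).
    return x**3 - 0.5 * x

H = 32                                   # hidden units (assumed)
W = rng.normal(scale=3.0, size=H)        # fixed input-to-hidden weights
b = rng.uniform(-3.0, 3.0, size=H)       # fixed hidden biases
v = np.zeros(H)                          # unknown output-layer weights

def features(x):
    # Hidden-layer activations; only v above is learned.
    return np.tanh(W * x + b)

def mse(v):
    # Empirical squared error on a test grid.
    xs = np.linspace(-1.0, 1.0, 201)
    preds = np.array([v @ features(x) for x in xs])
    return float(np.mean((preds - f(xs)) ** 2))

mse_before = mse(v)

for n in range(50000):
    x = rng.uniform(-1.0, 1.0)
    y = f(x) + 0.05 * rng.normal()       # one noisy sample
    h = features(x)
    # Robbins-Monro gain: sum a_n diverges, sum a_n^2 converges.
    a_n = 1.0 / (H * (n + 1) ** 0.6)
    # Stochastic-gradient (LMS-type) update of the output weights.
    v -= a_n * (v @ h - y) * h

mse_after = mse(v)
```

Because the unknown weights enter linearly, each update is a stochastic-approximation step on a convex objective, which is what makes the first class of algorithms tractable; sample-size versus error-bound trade-offs of the kind derived in the paper would govern how large `n` must be for a given accuracy.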
