IEEE Transactions on Neural Networks

Radial basis function networks and complexity regularization in function learning


Abstract

We apply the method of complexity regularization to derive estimation bounds for nonlinear function estimation using a single hidden layer radial basis function network. Our approach differs from previous complexity regularization neural-network function learning schemes in that we operate with random covering numbers and l1 metric entropy, making it possible to consider much broader families of activation functions, namely functions of bounded variation. Some constraints previously imposed on the network parameters are also eliminated this way. The network is trained by means of complexity regularization involving empirical risk minimization. Bounds on the expected risk in terms of the sample size are obtained for a large class of loss functions. Rates of convergence to the optimal loss are also derived.
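To make the training scheme described above concrete, the following is a minimal sketch of complexity-regularized empirical risk minimization for a single-hidden-layer RBF network. It is not the paper's construction: the Gaussian activation, the least-squares fit, the k·log(n)/n penalty, and names such as fit_rbf and complexity_regularized_fit are illustrative assumptions standing in for the covering-number-based penalty and the bounded-variation activation families analyzed in the paper.

```python
import numpy as np

def rbf_predict(X, centers, widths, weights, bias):
    # Single-hidden-layer RBF network: f(x) = bias + sum_j w_j * phi(||x - c_j|| / s_j)
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    phi = np.exp(-(d / widths) ** 2)  # Gaussian radial activation (illustrative choice)
    return bias + phi @ weights

def fit_rbf(X, y, k, rng):
    # Crude empirical-risk fit for a fixed number of hidden units k:
    # centers drawn from the data, a single width set by a median-distance heuristic,
    # and output weights obtained by least squares (squared-error loss).
    idx = rng.choice(len(X), size=k, replace=False)
    centers = X[idx]
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    widths = np.full(k, np.median(dists) + 1e-8)
    Phi = np.exp(-(dists / widths) ** 2)
    A = np.hstack([Phi, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return centers, widths, coef[:-1], coef[-1]

def complexity_regularized_fit(X, y, max_k=20, c=1.0, seed=0):
    # Model selection by complexity regularization: for each candidate size k,
    # minimize empirical risk, then choose the k minimizing
    #   empirical risk + penalty(k, n).
    # The c * k * log(n) / n penalty below is only an illustrative stand-in for
    # the covering-number-based complexity term used in the paper's analysis.
    rng = np.random.default_rng(seed)
    n = len(X)
    best, best_score = None, np.inf
    for k in range(1, max_k + 1):
        params = fit_rbf(X, y, k, rng)
        emp_risk = np.mean((y - rbf_predict(X, *params)) ** 2)
        score = emp_risk + c * k * np.log(n) / n
        if score < best_score:
            best, best_score = (k, params), score
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
    k, params = complexity_regularized_fit(X, y)
    print("selected hidden units:", k)
```

The penalized score trades off empirical fit against network size, so larger networks are selected only when the drop in empirical risk outweighs the complexity charge; the paper's bounds quantify this trade-off in terms of sample size and the metric entropy of the network class.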

