IAPR Workshop on Artificial Neural Networks in Pattern Recognition (ANNPR 2006); 2006-08-31 to 2006-09-02; Ulm (DE)

An Experimental Study on Training Radial Basis Functions by Gradient Descent


Abstract

In this paper, we present experiments comparing different training algorithms for Radial Basis Function (RBF) neural networks. In particular, we compare the classical training scheme, which consists of unsupervised training of the centers followed by supervised training of the output weights, with the fully supervised training by gradient descent proposed recently in some papers. We conclude that fully supervised training generally performs better. We also compare batch training with online training and conclude that online training reduces the number of iterations.
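The two schemes compared in the abstract can be illustrated with a small NumPy sketch: classical training places the centers without labels (here, a few k-means iterations) and then solves the linear output weights by least squares, while fully supervised training refines centers and weights jointly by batch gradient descent. The toy data, number of centers, fixed width, and learning rate are illustrative assumptions, not values from the paper, and the paper's exact formulation (e.g. whether widths are also adapted, or the online per-pattern variant) may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression task (illustrative, not from the paper): y = sin(x) + noise
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)

def rbf_design(X, centers, width):
    """Gaussian activations phi_j(x) = exp(-||x - c_j||^2 / (2 * width^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# --- Classical two-stage training ------------------------------------------
# Stage 1 (unsupervised): place K centers with a few k-means iterations.
K = 10
centers = X[rng.choice(len(X), K, replace=False)].copy()
for _ in range(20):
    assign = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    for j in range(K):
        pts = X[assign == j]
        if len(pts):
            centers[j] = pts.mean(axis=0)
width = 0.5  # fixed width (a modelling assumption here)

# Stage 2 (supervised): solve the linear output weights by least squares.
Phi = rbf_design(X, centers, width)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
mse_classical = np.mean((Phi @ w - y) ** 2)

# --- Fully supervised training: batch gradient descent on all parameters ---
# (An online variant would apply the same updates one pattern at a time.)
c, s, v = centers.copy(), width, w.copy()
lr = 0.005
for _ in range(300):
    Phi = rbf_design(X, c, s)
    err = Phi @ v - y                       # residuals, shape (N,)
    v -= lr * Phi.T @ err / len(X)          # gradient step on output weights
    # d phi_j / d c_j = phi_j * (x - c_j) / s^2, so the center gradient is:
    G = err[:, None] * v[None, :] * Phi     # (N, K)
    grad_c = (G[:, :, None] * (X[:, None, :] - c[None, :, :])).mean(0) / s ** 2
    c -= lr * grad_c

mse_full = np.mean((rbf_design(X, c, s) @ v - y) ** 2)
```

On this toy problem the least-squares stage already fits well, so the gradient-descent refinement mainly illustrates the mechanics; the abstract's finding is that on real benchmarks the fully supervised scheme generally wins.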
