Gradient Descent Training of Radial Basis Functions


Abstract

In this paper we present experiments comparing different training algorithms for Radial Basis Function (RBF) neural networks. In particular, we compare the classical training, which consists of unsupervised training of the centers followed by supervised training of the output weights, with the fully supervised training by gradient descent proposed recently in some papers. We conclude that fully supervised training generally performs better. We also compare batch training with online training in the fully supervised setting, and conclude that online training reduces the number of iterations and therefore increases the speed of convergence.
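The fully supervised scheme the abstract describes can be sketched as follows: a Gaussian RBF network whose centers, widths, and output weights are all updated by gradient descent, with online (per-pattern) updates. This is a minimal illustration, not the authors' implementation; the network size, learning rate, and initialization below are assumptions chosen for the sketch.

```python
import numpy as np

def gaussian_phi(x, centers, widths):
    """Gaussian RBF activations of one input vector x."""
    sq = np.sum((x - centers) ** 2, axis=1)
    return np.exp(-sq / (2.0 * widths ** 2))

def train_online(X, y, n_centers=6, lr=0.05, epochs=300, seed=0):
    """Fully supervised online gradient descent on centers, widths, weights.

    All hyperparameters here are illustrative assumptions, not values
    from the paper.
    """
    rng = np.random.default_rng(seed)
    # initialize centers on random training points, widths and weights small
    centers = X[rng.choice(len(X), n_centers, replace=False)].astype(float).copy()
    widths = np.full(n_centers, 0.5)
    weights = rng.normal(scale=0.1, size=n_centers)
    for _ in range(epochs):
        # online: parameters change after every pattern, in random order
        for i in rng.permutation(len(X)):
            x, t = X[i], y[i]
            phi = gaussian_phi(x, centers, widths)
            err = float(phi @ weights - t)          # output error for this pattern
            sq = np.sum((x - centers) ** 2, axis=1)
            # gradients of 0.5 * err**2 w.r.t. each parameter group
            g_w = err * phi
            g_c = err * (weights * phi / widths ** 2)[:, None] * (x - centers)
            g_s = err * weights * phi * sq / widths ** 3
            weights -= lr * g_w
            centers -= lr * g_c
            widths -= lr * g_s
    return centers, widths, weights

# usage: fit a 1-D toy function
X = np.linspace(0, 2 * np.pi, 40).reshape(-1, 1)
y = np.sin(X).ravel()
centers, widths, weights = train_online(X, y)
preds = np.array([gaussian_phi(x, centers, widths) @ weights for x in X])
```

A batch variant would accumulate the three gradients over all patterns before updating; the point the abstract makes is that the per-pattern updates above typically need fewer passes over the data to converge.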

