Soft Computing - A Fusion of Foundations, Methodologies and Applications

Speeding up the scaled conjugate gradient algorithm and its application in neuro-fuzzy classifier training

Abstract

The aim of this study is to speed up the scaled conjugate gradient (SCG) algorithm by shortening the training time per iteration. The SCG algorithm, a supervised learning algorithm for network-based methods, is generally used to solve large-scale problems. SCG computes second-order information from two first-order gradients of the parameters, evaluated over the entire training dataset, so its per-iteration computational cost is high for large-scale problems. In this study, one of the first-order gradients is estimated from previously calculated gradients without using the training dataset, by means of a least squares error estimator. For large-scale problems, the complexity of estimating the gradient is much smaller than that of computing it, because the estimation is independent of the size of the dataset. The proposed algorithm is applied to neuro-fuzzy classifier and neural network training. The theoretical basis for the algorithm is provided, and its performance is illustrated by comparison with several training algorithms on well-known datasets. The empirical results indicate that the proposed algorithm is faster per iteration than SCG, decreasing training time by 20–50% while maintaining a convergence rate similar to that of SCG.

Keywords: Speeding up learning - Gradient estimation - The scaled conjugate gradient algorithm - Neuro-fuzzy classifier - Neural network - Large-scale problems
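The abstract gives neither the estimator's exact form nor the precise point in SCG where the estimate is substituted, so the following Python sketch is only an illustration of the general idea, under the assumption of a per-component linear least squares extrapolation over the last few stored gradients. What it does show faithfully is the cost argument: the estimate takes O(m*n) work for m stored gradients and n parameters, independent of the training-set size, whereas a true gradient requires a full pass over the data.

# Hypothetical sketch (all names are illustrative, not from the paper).
# Per iteration, SCG approximates the Hessian-vector product as
#   s_k = (E'(w_k + sigma_k * p_k) - E'(w_k)) / sigma_k,
# which needs two full-dataset gradients; the idea in the abstract is to
# replace one of them with a least squares estimate built from gradients
# computed in earlier iterations.
import numpy as np

def estimate_gradient(grad_history, m=4):
    # Fit each gradient component as a linear trend over the last m
    # stored gradients and extrapolate one step ahead. Cost is O(m * n)
    # for n parameters -- independent of the dataset size.
    g = np.asarray(grad_history[-m:])             # shape (m, n)
    t = np.arange(g.shape[0], dtype=float)        # pseudo-time of each gradient
    A = np.column_stack([t, np.ones_like(t)])     # design matrix, shape (m, 2)
    coef, *_ = np.linalg.lstsq(A, g, rcond=None)  # slopes and intercepts, (2, n)
    return coef[0] * g.shape[0] + coef[1]         # extrapolate to t = m

# Toy check on f(w) = 0.5 * ||w||^2, whose gradient at w is w itself;
# the stored gradients stand in for full-dataset gradient evaluations.
w = np.array([4.0, -2.0, 1.0])
history = []
for _ in range(5):
    history.append(w.copy())        # exact gradient at the current w
    w = w - 0.1 * history[-1]       # plain gradient step, for illustration
print("estimated gradient:", estimate_gradient(history))
print("exact gradient:    ", w)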
