IEEE Transactions on Neural Networks

Global Convergence of GHA Learning Algorithm With Nonzero-Approaching Adaptive Learning Rates



Abstract

The generalized Hebbian algorithm (GHA) is one of the most widely used principal component analysis (PCA) neural network (NN) learning algorithms. The learning rates of GHA play an important role in the convergence of the algorithm in applications. Traditionally, the learning rates of GHA are required to converge to zero so that its convergence can be analyzed by studying the corresponding deterministic continuous-time (DCT) equations. However, requiring the learning rates to approach zero is impractical in applications because of computational roundoff limitations and tracking requirements. In this paper, nonzero-approaching adaptive learning rates are proposed to overcome this problem. The proposed adaptive learning rates converge to positive constants, which not only speeds up the evolution of the algorithm considerably but also guarantees global convergence of the GHA algorithm. The convergence is studied in detail by analyzing the corresponding deterministic discrete-time (DDT) equations. Extensive simulations are carried out to illustrate the theory.
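For illustration, the sketch below implements the standard GHA (Sanger's rule) update W <- W + eta(k)[y x^T - LT(y y^T) W] with y = W x, driven by a learning rate that decays toward a positive constant rather than to zero. The schedule adaptive_eta, its parameters, and all numerical settings are illustrative assumptions for this sketch, not the adaptive rates derived in the paper.

```python
import numpy as np

def gha_step(W, x, eta):
    """One GHA update: W <- W + eta * (y x^T - LT[y y^T] W), where y = W x
    and LT[.] keeps the lower-triangular part (Sanger's rule)."""
    y = W @ x
    lt = np.tril(np.outer(y, y))
    return W + eta * (np.outer(y, x) - lt @ W)

def adaptive_eta(k, eta0=0.1, eta_inf=0.01, tau=300.0):
    """Illustrative nonzero-approaching schedule (an assumption, not the
    rates derived in the paper): eta decays from eta0 toward the positive
    constant eta_inf instead of decaying to zero."""
    return eta_inf + (eta0 - eta_inf) * np.exp(-k / tau)

rng = np.random.default_rng(0)
d, m, n_steps = 5, 2, 5000
A = rng.standard_normal((d, d))
C = A @ A.T / d                              # covariance of the data source
X = rng.multivariate_normal(np.zeros(d), C, size=n_steps)

W = 0.1 * rng.standard_normal((m, d))        # rows estimate the top-m principal directions
for k, x in enumerate(X):
    W = gha_step(W, x, adaptive_eta(k))

# If converged, the rows of W align (up to sign) with the top-m eigenvectors of C.
eigvals, eigvecs = np.linalg.eigh(C)
top = eigvecs[:, ::-1][:, :m].T
print(np.round(np.abs(W @ top.T), 3))        # should be close to the identity matrix
```

Because eta(k) settles at a positive constant, the iteration keeps a nonzero step size indefinitely, which is what allows it to track slowly varying input statistics instead of freezing as a zero-approaching rate would.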
