IEEE Transactions on Neural Networks

On Adaptive Learning Rate That Guarantees Convergence in Feedforward Networks

Abstract

This paper investigates new learning algorithms (LF I and LF II) based on a Lyapunov function for the training of feedforward neural networks. These algorithms have an interesting parallel with the popular backpropagation (BP) algorithm: the fixed learning rate is replaced by an adaptive learning rate computed using a convergence theorem based on Lyapunov stability theory. LF II, a modified version of LF I, is introduced with the aim of avoiding local minima; this modification also improves the convergence speed in some cases. Conditions for achieving the global minimum with this kind of algorithm are studied in detail. The performance of the proposed algorithms is compared with the BP algorithm and extended Kalman filtering (EKF) on three benchmark function approximation problems: XOR, 3-bit parity, and the 8-3 encoder. The comparison is made in terms of the number of learning iterations and the computational time required for convergence. The proposed algorithms (LF I and II) are found to converge much faster than the other two algorithms for the same accuracy. Finally, a comparison is made on a complex two-dimensional (2-D) Gabor function, and the effect of the adaptive learning rate on convergence speed is verified. In a nutshell, the investigations made in this paper help us better understand the learning procedure of feedforward neural networks in terms of adaptive learning rate, convergence speed, and local minima.
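The abstract does not reproduce the LF I/LF II update equations, but the core idea it describes, replacing BP's fixed learning rate with one derived from a Lyapunov convergence argument, can be sketched. In the sketch below, the Lyapunov candidate V = 0.5‖e‖² and the rate η = μV/‖∇V‖² are illustrative assumptions rather than the paper's exact formulas: to first order, the step Δw = −η∇V gives ΔV ≈ −η‖∇V‖² = −μV, so the error energy decays geometrically whenever the gradient is nonzero. The 2-2-1 architecture, the parameter μ, the step cap, and the random seed are all choices made for this example, applied to the paper's XOR benchmark.

```python
# Minimal sketch of a Lyapunov-based adaptive learning rate, assuming
# V(w) = 0.5 * ||e||^2 and eta = mu * V / ||grad V||^2 (illustrative forms,
# not necessarily the paper's LF I / LF II update rules).
import numpy as np

rng = np.random.default_rng(0)

# XOR benchmark (one of the three problems used in the paper).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

# Hypothetical 2-2-1 network with sigmoid units.
W1 = rng.normal(scale=0.5, size=(2, 2)); b1 = np.zeros(2)
W2 = rng.normal(scale=0.5, size=(2, 1)); b2 = np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    return h, y

mu, eps = 0.5, 1e-12          # mu in (0, 1): target fractional decrease of V
for step in range(20000):
    h, y = forward(X)
    e = y - Y                 # output error
    V = 0.5 * np.sum(e ** 2)  # Lyapunov candidate V = 0.5 * ||e||^2

    # Gradients of V via standard backpropagation machinery.
    d2 = e * y * (1 - y)
    gW2 = h.T @ d2; gb2 = d2.sum(0)
    d1 = (d2 @ W2.T) * h * (1 - h)
    gW1 = X.T @ d1; gb1 = d1.sum(0)

    gnorm2 = sum((g ** 2).sum() for g in (gW1, gb1, gW2, gb2))
    # Adaptive rate replaces BP's fixed eta; capped here as a practical
    # safeguard against blow-up when the gradient is very small.
    eta = min(mu * V / (gnorm2 + eps), 10.0)

    W1 -= eta * gW1; b1 -= eta * gb1
    W2 -= eta * gW2; b2 -= eta * gb2
    if V < 1e-4:
        break

print(f"stopped after {step + 1} steps, V = {V:.2e}")
```

Note the design trade-off this sketch exposes: when the gradient shrinks while V is still large (a plateau or local minimum), the adaptive rate grows and takes large steps, which is precisely the regime the abstract says LF II modifies to avoid local minima.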