Journal: Neurocomputing

A conjugate gradient learning algorithm for recurrent neural networks



Abstract

The real-time recurrent learning (RTRL) algorithm, originally proposed for training recurrent neural networks, requires many iterations to converge because a small learning rate must be used. An obvious remedy is to use a large learning rate, but this can produce undesirable convergence behavior. This paper improves the convergence speed and convergence characteristics of the RTRL algorithm by incorporating conjugate gradient computation into its learning procedure. The resulting algorithm, referred to as the conjugate gradient recurrent learning (CGRL) algorithm, is applied to train fully connected recurrent neural networks to simulate a second-order low-pass filter and to predict the chaotic intensity pulsations of an NH3 laser. Results show that the CGRL algorithm yields a substantial improvement in convergence (measured as the reduction in mean squared error per epoch) compared to the RTRL and batch-mode RTRL algorithms.
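The core idea the abstract describes — replacing the plain gradient step with a conjugate-gradient direction update — can be sketched as follows. This is a minimal illustration on a toy quadratic loss standing in for the network's mean squared error, not the paper's CGRL implementation: the function name `cg_minimize`, the backtracking line search, and the toy problem are all assumptions for illustration; the actual algorithm would obtain the gradient from the RTRL recursions.

```python
import numpy as np

def cg_minimize(loss, grad, w, n_epochs=100):
    """Minimize loss(w) using Polak-Ribiere+ conjugate gradient directions
    with a simple backtracking line search (illustrative sketch only)."""
    g = grad(w)
    d = -g                          # first direction: steepest descent
    for _ in range(n_epochs):
        if g @ g < 1e-12:           # gradient has vanished: converged
            break
        alpha = 1.0                 # backtrack until the step decreases the loss
        while loss(w + alpha * d) >= loss(w) and alpha > 1e-12:
            alpha *= 0.5
        w = w + alpha * d
        g_new = grad(w)
        # Polak-Ribiere+ coefficient: mixes the new negative gradient
        # with the previous search direction
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta * d
        g = g_new
    return w

# Toy quadratic loss standing in for the RNN's mean squared error
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
loss = lambda w: 0.5 * w @ A @ w - b @ w
grad = lambda w: A @ w - b

w_opt = cg_minimize(loss, grad, np.zeros(2))
```

Because successive search directions are conjugate rather than simply downhill, the method avoids the zig-zagging of small-step gradient descent, which is the mechanism behind the per-epoch error reduction the abstract reports.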
