IEEE Transactions on Neural Networks

New results on recurrent network training: unifying the algorithms and accelerating convergence



Abstract

How to train recurrent networks efficiently remains a challenging and active research topic. Most of the proposed training approaches are based on computational methods for efficiently obtaining the gradient of the error function, and can generally be grouped into five major categories. In this study we present a derivation that unifies these approaches, demonstrating that they are simply five different ways of solving a particular matrix equation. The second goal of this paper is to develop a new algorithm based on the insights gained from this unified formulation. The new algorithm, which approximates the error gradient, has lower computational complexity per weight update than the competing techniques for most typical problems. In addition, it reaches the error minimum in far fewer iterations. A desirable characteristic of recurrent network training algorithms is the ability to update the weights in an online fashion. We have also developed an online version of the proposed algorithm, based on updating the error gradient approximation in a recursive manner.
