IEEE Transactions on Neural Networks

A Normalized Adaptive Training of Recurrent Neural Networks With Augmented Error Gradient



Abstract

In training algorithms for recurrent neural networks (RNN), convergence speed and training error are two conflicting performance measures. In this letter, we propose a normalized adaptive recurrent learning (NARL) algorithm to obtain a tradeoff between transient and steady-state response. An augmented term is added to the error gradient to exactly model the derivative of the cost function with respect to the hidden-layer weights. The influence of the induced gain of the activation function on training stability is also taken into consideration. Moreover, an adaptive learning rate is employed to improve the robustness of the gradient training. Finally, computer simulations of a model prediction problem are presented to compare NARL with conventional normalized real-time recurrent learning (N-RTRL).
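The abstract does not give the algorithm's equations, so the sketch below only illustrates the conventional N-RTRL baseline it mentions: real-time recurrent learning for a single recurrent neuron with an NLMS-style normalization of the gradient and a simple decaying learning-rate schedule. The toy teacher system, the normalization constant eps, and the rate schedule are all illustrative assumptions, not the paper's NARL formulation or its augmented-gradient term.

```python
import numpy as np

def n_rtrl_demo(T=500, eta0=0.5, eps=1e-6, seed=0):
    """Hedged sketch of normalized RTRL for a single recurrent neuron
    y_t = h_t = tanh(w*h_{t-1} + u*x_t); illustrative only."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(T)
    # Toy teacher system to track (assumed, not from the paper).
    d = np.zeros(T)
    for t in range(1, T):
        d[t] = np.tanh(0.5 * d[t - 1] + 0.8 * x[t])

    w, u = 0.0, 0.0          # recurrent and input weights
    h, p, q = 0.0, 0.0, 0.0  # state and RTRL sensitivities dh/dw, dh/du
    eta = eta0
    for t in range(1, T):
        a = w * h + u * x[t]
        h_new = np.tanh(a)
        g = 1.0 - h_new ** 2  # induced gain of tanh at the operating point
        # RTRL recursions: sensitivities carry the recurrent dependence forward.
        p = g * (h + w * p)
        q = g * (x[t] + w * q)
        e = d[t] - h_new
        norm = eps + p * p + q * q    # NLMS-style gradient normalization
        w += eta * e * p / norm
        u += eta * e * q / norm
        eta = eta0 / (1.0 + 1e-3 * t) # assumed decaying rate, not the paper's rule
        h = h_new
    return w, u

if __name__ == "__main__":
    print(n_rtrl_demo())
```

Per the abstract, NARL differs from this baseline by augmenting the error gradient to model the cost derivative with respect to the hidden-layer weights exactly and by adapting the learning rate for robustness; those details are in the full text.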


