
Memory in Backpropagation-Decorrelation O(N) Efficient Online Recurrent Learning



Abstract

We consider regularization methods to improve the recently introduced backpropagation-decorrelation (BPDC) algorithm for O(N) online training of fully recurrent networks. While BPDC combines one-step error backpropagation with the use of the network dynamics' temporal memory by means of decorrelating activations, it is an online algorithm that uses only instantaneous states and errors. As an enhancement, we propose several ways to introduce memory into the algorithm for regularization. Simulation results on standard tasks show that these strategies have differing effects: some improve training performance at the cost of overfitting, while others degrade training errors.
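The O(N) cost per step comes from adapting only the weights into the output unit, using nothing but the instantaneous state and error, with the update normalized by the squared state norm. The following is a minimal illustrative sketch of such a state-normalized online update on a fixed fully recurrent network; it keeps only the instantaneous, decorrelation-normalized part of the rule and omits the one-step backpropagation term of full BPDC, so all names, sizes, and constants here are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

# Hypothetical simplified sketch: O(N)-per-step online training of the
# read-out weights of a fixed fully recurrent network, using only the
# instantaneous state s and instantaneous error e, normalized by the
# squared state norm (plus a small regularizer eps).

rng = np.random.default_rng(0)
N = 20                              # network size (assumed)
eta = 0.1                           # learning rate (assumed)
eps = 1e-3                          # regularizer in the norm (assumed)
W = rng.normal(0.0, 0.2, (N, N))    # fixed recurrent weights
w_out = np.zeros(N)                 # trainable weights into the output
x = np.zeros(N)                     # network state

errors = []
for k in range(500):
    u = np.sin(0.2 * k)                     # scalar input signal
    target = 0.5 * np.sin(0.2 * (k + 1))    # one-step-ahead target
    x = np.tanh(W @ x + u)                  # fully recurrent dynamics
    y = w_out @ x                           # linear output
    e = y - target                          # instantaneous error
    # O(N) state-normalized update using only instantaneous quantities
    w_out -= eta * e * x / (x @ x + eps)
    errors.append(e * e)

early = float(np.mean(errors[:50]))     # mean squared error, first 50 steps
late = float(np.mean(errors[-50:]))     # mean squared error, last 50 steps
print(early, late)
```

On this toy one-step-ahead prediction task the squared error in the last 50 steps falls well below that of the first 50, illustrating how an instantaneous, norm-regularized update can train the read-out online without storing past states, which is exactly the memory the proposed regularization strategies reintroduce.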
