
Strongly Improved Stability and Faster Convergence of Temporal Sequence Learning by Using Input Correlations Only

Abstract

Currently, all important low-level, unsupervised network learning algorithms follow the paradigm of Hebb, where input and output activity are correlated to change the connection strength of a synapse. As a consequence, however, classical Hebbian learning always carries a potentially destabilizing autocorrelation term, which arises because every input is reflected, in weighted form, in the neuron's output. This self-correlation can lead to positive feedback, where increasing weights increase the output and vice versa, which may result in divergence. This can be avoided by strategies such as weight normalization or weight saturation, which, however, introduce problems of their own. Consequently, in most cases high learning rates cannot be used for Hebbian learning, leading to relatively slow convergence. Here we introduce a novel correlation-based learning rule that is related to our isotropic sequence order (ISO) learning rule (Porr & Woergoetter, 2003a) but replaces the derivative of the output in the learning rule with the derivative of the reflex input. Hence, the new rule uses input correlations only, effectively implementing strict heterosynaptic learning. This looks like a minor modification but leads to dramatically improved properties. Eliminating the output from the learning rule removes the unwanted, destabilizing autocorrelation term, allowing us to use high learning rates. As a consequence, we can show mathematically that the theoretical optimum of one-shot learning can be reached under ideal conditions with the new rule. This result is then tested against four different experimental setups, and we show that in all of them very few learning experiences (sometimes only one) are needed to achieve the learning goal. As a consequence, the new learning rule is up to 100 times faster and in general more stable than ISO learning.
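Reading the abstract literally, the modification is that a correlation-based update of the form drho_j/dt = mu * u_j * dv/dt (ISO, with v the output) becomes drho_j/dt = mu * u_j * du_0/dt, where u_0 is the filtered reflex input and u_j are the filtered predictive inputs. Below is a minimal, hypothetical sketch of that idea in discrete time; the signal shapes, variable names, and parameter values are assumptions made for illustration, not the authors' setup.

```python
import numpy as np

# Minimal sketch of the input-correlation-only update described in the abstract,
# contrasted with the ISO rule it replaces. The discrete-time form, the variable
# names (u0, u1, rho, mu), and the toy signals are illustrative assumptions, not
# the authors' implementation.

def ico_step(rho_j, u_j, u0_prev, u0_now, mu):
    """Heterosynaptic update: the weight follows the derivative of the reflex
    input u0, so the neuron's own output never feeds back into learning."""
    du0 = u0_now - u0_prev            # temporal derivative of the reflex input
    return rho_j + mu * u_j * du0     # drho_j/dt = mu * u_j * du0/dt

def iso_step(rho_j, u_j, v_prev, v_now, mu):
    """Classical ISO update, for comparison: the output derivative enters the
    rule and carries the destabilizing autocorrelation term."""
    dv = v_now - v_prev
    return rho_j + mu * u_j * dv      # drho_j/dt = mu * u_j * dv/dt

# Toy usage: a predictive input u1 that precedes the reflex input u0 in time.
T = 200
t = np.arange(T)
u1 = np.exp(-0.5 * ((t - 80) / 5.0) ** 2)     # earlier, predictive signal
u0 = np.exp(-0.5 * ((t - 100) / 5.0) ** 2)    # later, reflex signal
rho0, rho1 = 1.0, 0.0                         # fixed reflex weight, plastic rho1
mu = 0.5                                      # a high learning rate

for k in range(1, T):
    v = rho0 * u0[k] + rho1 * u1[k]           # neuron output; NOT used by ICO
    rho1 = ico_step(rho1, u1[k], u0[k - 1], u0[k], mu)

print("learned predictive weight rho1 =", rho1)
```

Because u_0 is an external signal, growing rho_j does not grow the quantity that drives learning; this is the intuition, stated in the abstract, for why the autocorrelation term disappears and high learning rates become usable.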

Bibliographic Information

  • Source
    Neural Computation | 2006, No. 6 | pp. 1380-1412 | 33 pages
  • Author Affiliation

    Department of Electronics and Electrical Engineering, University of Glasgow, Glasgow, GT12 8LT, Scotland;

  • Indexed in: Science Citation Index (SCI, USA); Chemical Abstracts (CA, USA)
  • Original Format: PDF
  • Language: English (eng)
  • CLC Classification: Artificial intelligence theory
  • Keywords
