
A Three-Threshold Learning Rule Approaches the Maximal Capacity of Recurrent Neural Networks



Abstract

Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model's simplicity and the locality of the synaptic update rules come at the cost of a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns to be memorized are presented online as strong afferent currents, producing a bimodal distribution for the neurons' synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero-weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns.
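To make the rule concrete, the sketch below applies the three-threshold update to one neuron's plastic excitatory weights, following only the description in the abstract. The threshold values, the learning rate `lr`, and the clipping of weights to non-negative values are illustrative assumptions for this example, not parameters taken from the paper.

```python
import numpy as np

def three_threshold_update(w, x, h,
                           theta_low, theta_mid, theta_high,
                           lr=0.01):
    """Three-threshold update for one neuron's incoming excitatory weights.

    w : plastic weights from presynaptic neurons
    x : binary presynaptic activities (only active inputs, x == 1, change)
    h : the neuron's local field (total synaptic input)

    Rule, as summarized in the abstract:
      - h above theta_high or below theta_low -> no plasticity
      - theta_mid < h <= theta_high           -> potentiate active inputs
      - theta_low <= h <= theta_mid           -> depress active inputs
    """
    if h > theta_high or h < theta_low:
        return w                       # outer regimes: weights unchanged
    if h > theta_mid:
        w = w + lr * x                 # potentiation of active synapses
    else:
        w = w - lr * x                 # depression of active synapses
    return np.maximum(w, 0.0)          # keep excitatory weights non-negative
                                       # (assumption for this sketch)

# Hypothetical usage on one neuron with 100 inputs; threshold values
# are arbitrary and chosen so the demo lands in the plastic regime.
rng = np.random.default_rng(0)
w = rng.uniform(0.0, 1.0, size=100)          # initial plastic weights
x = (rng.random(100) < 0.2).astype(float)    # sparse binary input pattern
h = w @ x                                    # local field for this pattern
w = three_threshold_update(w, x, h,
                           theta_low=1.0, theta_mid=8.0, theta_high=15.0)
```

In a full simulation this update would run online, once per presented pattern and per neuron, which is what keeps the rule local: each synapse sees only its own presynaptic activity and its neuron's local field.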
