International Joint Conference on Neural Networks

Optimal synaptic learning in non-linear associative memory

Abstract

Neural associative memories are single-layer perceptrons with fast synaptic learning, typically storing discrete associations between pairs of neural activity patterns. For linear learning, such as that employed in Hopfield-type networks, it is well known that the so-called covariance rule is optimal, resulting in minimal output noise and maximal storage capacity. On the other hand, numerical simulations suggest that nonlinear rules, such as clipped Hebbian learning in Willshaw-type networks, perform better, at least for sparse neural activity and finite network size. Here I show that the Willshaw and Hopfield models are only limiting cases of a general optimal model in which synaptic learning is determined by probabilistic Bayesian considerations. Asymptotically, for large networks and very sparse neuron activity, the Bayesian model becomes identical to an inhibitory implementation of the Willshaw model. Similarly, for less sparse patterns, the Bayesian model becomes identical to the Hopfield network employing the covariance rule. For intermediate sparseness or finite networks, the optimal Bayesian rule differs from both the Willshaw and Hopfield models and can significantly improve memory performance.
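For concreteness, the following minimal Python sketch (not taken from the paper) contrasts the two limit-case learning rules the abstract names: the linear covariance rule of Hopfield-type networks and the clipped Hebbian rule of Willshaw-type networks. The network size, number of stored pairs, sparseness, and the one-step winner-take-all retrieval are illustrative assumptions, not the paper's Bayesian formulation.

```python
# Sketch: store M sparse pattern pairs with two classical rules and
# compare one-step recall. Sizes and retrieval scheme are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, M, p = 100, 20, 0.1          # neurons, stored pattern pairs, sparseness

# Sparse binary address/content patterns u^mu, v^mu in {0,1}^n
U = (rng.random((M, n)) < p).astype(float)
V = (rng.random((M, n)) < p).astype(float)

# Covariance rule (Hopfield-type): W_ij = sum_mu (v_i^mu - p)(u_j^mu - p)
W_cov = (V - p).T @ (U - p)

# Clipped Hebbian rule (Willshaw-type): W_ij = 1 if any stored pair
# co-activates postsynaptic i and presynaptic j, else 0
W_willshaw = np.clip(V.T @ U, 0.0, 1.0)

def retrieve(W, u, k):
    """One-step retrieval: activate the k units with largest dendritic sum."""
    x = W @ u
    out = np.zeros_like(x)
    out[np.argsort(x)[-k:]] = 1.0
    return out

# Recall the first stored pair with each rule; measure overlap with v^0
k = int(V[0].sum())
for name, W in [("covariance", W_cov), ("Willshaw", W_willshaw)]:
    v_hat = retrieve(W, U[0], k)
    overlap = (v_hat * V[0]).sum() / max(k, 1)
    print(f"{name:10s} recall overlap: {overlap:.2f}")
```

At this sparseness and load both rules recall stored pairs well; the abstract's point is that in the intermediate-sparseness and finite-size regime neither is optimal, and the Bayesian rule interpolating between them can do better.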
