Neural associative memories are single-layer perceptrons with fast synaptic learning, typically storing discrete associations between pairs of neural activity patterns. For linear learning, such as that employed in Hopfield-type networks, it is well known that the so-called covariance rule is optimal, resulting in minimal output noise and maximal storage capacity. On the other hand, numerical simulations suggest that nonlinear rules, such as clipped Hebbian learning in Willshaw-type networks, perform better, at least for sparse neural activity and finite network size. Here I show that the Willshaw and Hopfield models are only limit cases of a general optimal model in which synaptic learning is determined by probabilistic Bayesian considerations. Asymptotically, for large networks and very sparse neuron activity, the Bayesian model becomes identical to an inhibitory implementation of the Willshaw model. Similarly, for less sparse patterns, the Bayesian model becomes identical to the Hopfield network employing the covariance rule. For intermediate sparseness or finite networks, the optimal Bayesian rule differs from both the Willshaw and Hopfield models and can significantly improve memory performance.
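
The two limit-case learning rules mentioned above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: patterns are assumed binary, stored as rows of matrices `X` (inputs) and `Y` (outputs), and the mean activities `p`, `q` are assumed known parameters.

```python
import numpy as np

def willshaw_weights(X, Y):
    """Clipped Hebbian learning (Willshaw-type): a synapse becomes 1
    if its pre- and postsynaptic neurons were ever coactive in any
    stored pattern pair; otherwise it stays 0."""
    # X: (M, n) binary input patterns; Y: (M, m) binary output patterns
    return np.clip(X.T @ Y, 0, 1)

def covariance_weights(X, Y, p, q):
    """Covariance rule (Hopfield-type linear learning): sum over
    pattern pairs of outer products of mean-centered activities."""
    # p, q: mean component activities of input and output patterns
    return (X - p).T @ (Y - q)

# Store two associations between 3-unit inputs and 2-unit outputs.
X = np.array([[1, 0, 1],
              [0, 1, 1]])
Y = np.array([[1, 0],
              [0, 1]])

W_willshaw = willshaw_weights(X, Y)
W_cov = covariance_weights(X, Y, p=0.5, q=0.5)
```

Retrieval in both models then amounts to thresholding the dendritic potentials `x @ W` of an address pattern `x`; the paper's point is that for intermediate sparseness neither weight matrix is optimal, and a Bayesian rule interpolates between them.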