IEEE Transactions on Neural Networks

A modified Hopfield auto-associative memory with improved capacity


Abstract

This paper describes a new procedure to implement a recurrent neural network (RNN), based on a new approach to the well-known Hopfield auto-associative memory. In our approach an RNN is seen as a complete graph G, and the learning mechanism is also based on Hebb's law, but with a very significant difference: the weights, which control the dynamics of the net, are obtained by coloring the graph G. Once training is complete, the synaptic matrix of the net is the weight matrix of the graph. Any such matrix satisfies certain spatial properties; for this reason these matrices are referred to as tetrahedral matrices. The geometrical properties of tetrahedral matrices may be used to classify the n-dimensional state-vector space into n classes. In the recall stage, a parameter vector related to the capacity of the network is introduced. It may be shown that the larger the value of the ith component of the parameter vector, the lower the capacity of the ith class of the state-vector space. Once the capacity has been controlled, a new set of parameters is introduced that uses the statistical deviation of the prototypes to compare them with the states that appear as fixed points, thus eliminating a great number of parasitic fixed points.
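The abstract's starting point is the classical Hopfield auto-associative memory trained by Hebb's law; the paper's contribution replaces the plain Hebbian weights with weights obtained by coloring the graph G. The coloring procedure itself is not given in the abstract, so the sketch below shows only the classical Hebbian baseline (outer-product weights, sign-threshold recall to a fixed point). Function names and the toy prototypes are illustrative, not from the paper.

```python
import numpy as np

def train_hebbian(patterns):
    """Build the synaptic matrix by Hebb's outer-product rule
    (the classical baseline; the paper instead derives weights
    by coloring the complete graph G)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / n

def recall(W, state, max_iters=20):
    """Synchronous recall: iterate s <- sign(W s) until a fixed point.
    Stored prototypes should be fixed points; spurious fixed points
    ("parasitic" states in the abstract) may also exist."""
    s = state.copy()
    for _ in range(max_iters):
        nxt = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(nxt, s):
            break
        s = nxt
    return s

# Store two bipolar prototypes and recover one from a one-bit-corrupted probe.
protos = np.array([[ 1, 1, -1, -1,  1, -1],
                   [-1, 1,  1, -1, -1,  1]])
W = train_hebbian(protos)
probe = protos[0].copy()
probe[0] = -1  # flip one component
print(recall(W, probe))  # converges back to the first prototype
```

Recall converges here because the corrupted probe still overlaps the first prototype far more than the second; the paper's parameter vector is aimed at controlling exactly this per-class capacity and at filtering out the parasitic fixed points that the plain Hebbian rule admits.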


