Neural and Stochastic Methods in Image and Signal Processing > Criterion for correct recalls in associative-memory neural networks

Criterion for correct recalls in associative-memory neural networks



Abstract

Abstract: A novel weighted outer-product learning (WOPL) scheme for associative memory neural networks (AMNNs) is presented. In the scheme, each fundamental memory is allocated a learning weight that directs its correct recall. Both the Hopfield and multiple-training models are instances of the WOPL model with particular sets of learning weights. A necessary condition on the learning weights for the convergence of the WOPL model is obtained through neural dynamics, and a criterion for choosing learning weights that yield correct associative recall of the fundamental memories is proposed. An important parameter called the signal-to-noise-ratio gain (SNRG) is devised; it is found empirically that each SNRG has its own threshold value, meaning that a fundamental memory can be correctly recalled whenever its corresponding SNRG is greater than or equal to that threshold. Furthermore, a theorem is given, and theoretical conditions on the SNRGs and learning weights for good associative-recall performance of the WOPL model are obtained. In principle, when all SNRGs or learning weights satisfy these conditions, the asymptotic storage capacity of the WOPL model grows at the greatest rate in a certain known stochastic sense for AMNNs, and the WOPL model can thus achieve correct recall of all fundamental memories. Representative computer simulations confirm the criterion and the theoretical analysis.
机译:摘要:提出了一种新的关联记忆神经网络加权外积学习(WOPL)方案。在该方案中,为每个基本内存分配了学习权重,以指导其正确的调用。 Hopfield模型和多种训练模型都是具有特定学习权重集的WOPL模型的实例。通过神经动力学获得了为WOPL模型的收敛性选择学习权重的必要条件。提出了一种选择学习权重以对基本记忆进行正确联想回忆的准则。在本文中,设计了一个重要的参数,称为信噪比增益(SNRG),根据经验发现,SNRG具有自己的阈值,这意味着当其对应的SNRG大于或等于时,可以正确地调用任何基本内存。等于其阈值。此外,给出了一个定理,并获得了一些有关SNRGs条件和学习权重的理论结果,以实现WOPL模型的良好关联召回性能。原则上,当选择的所有SNRG或学习权重均满足理论获得的条件时,WOPL模型的渐近存储容量将在AMNN的某些已知随机含义下以最大速率增长,因此,WOPL模型可以针对所有基本变量实现正确的召回回忆。代表性的计算机仿真证实了判据和理论分析。 !11
