...

Memory Capacities for Synaptic and Structural Plasticity

Abstract

Neural associative networks with plastic synapses have been proposed as computational models of brain function and for applications such as pattern recognition and information retrieval. To guide biological models and optimize technical applications, several definitions of memory capacity have been used to measure the efficiency of associative memory. Here we explain why the currently used performance measures bias the comparison between models and cannot serve as a theoretical benchmark. We introduce fair measures for information-theoretic capacity in associative memory that also provide a theoretical benchmark.

In neural networks, two ways of manipulating synapses can be discerned: synaptic plasticity, the change in strength of existing synapses, and structural plasticity, the creation and pruning of synapses. One of the new types of memory capacity we introduce permits quantifying how structural plasticity can increase network efficiency by compressing the network structure, for example, by pruning unused synapses. Specifically, we analyze operating regimes in the Willshaw model in which structural plasticity can compress the network structure and push performance to the theoretical benchmark. The amount C of information stored per synapse can then scale with the logarithm of the network size rather than being constant, as in classical Willshaw and Hopfield nets (C ≤ ln 2 ≈ 0.7). Further, the review contains novel technical material: a capacity analysis of the Willshaw model that rigorously controls for the level of retrieval quality, an analysis for memories with a nonconstant number of active units (where C ≤ 1/(e ln 2) ≈ 0.53), and an analysis of the computational complexity of associative memories with and without network compression.
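The Willshaw model discussed above stores associations between sparse binary patterns in a matrix of binary, clipped-Hebbian synapses, and the capacity C measures the information retained per synapse. As a minimal illustrative sketch only, and not the paper's own analysis, the following Python/NumPy snippet shows storage and one-step retrieval; the pattern sizes, sparsity, and the helper names `store` and `retrieve` are arbitrary assumptions for the example.

```python
import numpy as np

def store(X, Y):
    """Willshaw (clipped Hebbian) learning: binary synapse W[i, j] is
    potentiated if input unit j and output unit i were ever co-active
    in a stored pattern pair; repeated co-activations do not add weight."""
    W = np.zeros((Y.shape[1], X.shape[1]), dtype=np.uint8)
    for x, y in zip(X, Y):
        W |= np.outer(y, x).astype(np.uint8)
    return W

def retrieve(W, x, theta=None):
    """One-step retrieval: threshold the dendritic potentials W @ x.
    The classical threshold equals the query's number of active units."""
    if theta is None:
        theta = int(x.sum())
    return (W.astype(int) @ x.astype(int) >= theta).astype(np.uint8)

# Toy run with sparse random patterns (sizes chosen for illustration only).
rng = np.random.default_rng(0)
n = m = 1000          # input / output layer sizes
k = 10                # active units per pattern (sparse coding)
M = 500               # number of stored pattern pairs
X = np.zeros((M, n), dtype=np.uint8)
Y = np.zeros((M, m), dtype=np.uint8)
for mu in range(M):
    X[mu, rng.choice(n, k, replace=False)] = 1
    Y[mu, rng.choice(m, k, replace=False)] = 1

W = store(X, Y)
print("memory load (fraction of potentiated synapses):", W.mean())
print("retrieval errors for pattern 0:", int((retrieve(W, X[0]) != Y[0]).sum()))
```

In this setting the memory load (fraction of potentiated synapses) governs retrieval errors; for optimally sparse patterns the classical Willshaw bound of C ≤ ln 2 ≈ 0.69 bits per binary synapse is approached when roughly half of the synapses are potentiated, which is where pruning the unused synapses, as discussed in the abstract, can further compress the stored structure.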

Bibliographic details

  • Source
    Neural Computation | 2010, Issue 2 | pp. 289-341 | 53 pages
  • Author affiliations

    Honda Research Institute Europe GmbH, D-63073 Offenbach, Germany;

    Institut fuer Neuroinformatik, Fakultaet fuer Ingenieurwissenschaften und Informatik, Universitaet Ulm, D-89069 Ulm, Germany;

    University of California at Berkeley, Redwood Center for Theoretical Neuroscience, Berkeley, CA 94720-3220, U.S.A.;

  • Indexed in: Science Citation Index (SCI); Chemical Abstracts (CA)
  • Original format: PDF
  • Language: English
