PLoS Computational Biology

General differential Hebbian learning: Capturing temporal relations between events in neural networks and the brain

Abstract

Learning in biologically relevant neural-network models usually relies on Hebb learning rules. The typical implementations of these rules change the synaptic strength on the basis of the co-occurrence of the neural events taking place at a certain time in the pre- and post-synaptic neurons. Differential Hebbian learning (DHL) rules, instead, update the synapse by taking into account the temporal relation, captured with derivatives, between the neural events happening in the recent past. The few DHL rules proposed so far can update the synaptic weights only in a few ways: this is a limitation for the study of dynamical neurons and neural-network models. Moreover, empirical evidence on spike-timing-dependent plasticity (STDP) in the brain shows that different neurons express a surprisingly rich repertoire of learning processes going far beyond existing DHL rules. This opens up a second problem: how to capture such processes with DHL rules. Here we propose a general DHL (G-DHL) rule that generates the existing rules and many others. The rule has a high expressiveness as it combines the pre- and post-synaptic neuron signals and their derivatives in different ways. The flexibility of the rule is shown by applying it to various signals of artificial neurons and by fitting several different STDP experimental data sets. To these ends, we propose techniques to pre-process the neural signals and capture the temporal relations between the neural events of interest. We also propose a procedure to automatically identify the rule components and parameters that best fit different STDP data sets, and show how the identified components might be used to heuristically guide the search for the biophysical mechanisms underlying STDP. Overall, the results show that the G-DHL rule represents a useful means to study time-sensitive learning processes in both artificial neural networks and the brain.
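
As a concrete illustration of the kind of update that DHL rules perform, the sketch below implements a simple differential Hebbian update in Python on discretised pre- and post-synaptic activity traces, approximating the derivatives with finite differences. The function name, the particular pair of differential terms, and their coefficients are illustrative assumptions for this sketch, not the specific G-DHL components identified in the paper.

```python
import numpy as np

def dhl_weight_update(w, x_pre, y_post, eta=0.01, dt=1.0):
    """One differential-Hebbian weight update from discretised activity traces.

    x_pre, y_post: 1-D arrays of pre- and post-synaptic signals sampled
    every dt. The derivative terms make the update sensitive to the
    temporal order of the two neurons' events, unlike a plain Hebb rule.
    """
    # Finite-difference approximation of the signal derivatives.
    dx = np.diff(x_pre) / dt
    dy = np.diff(y_post) / dt

    # Align the signals with their derivatives (drop the first sample).
    x = x_pre[1:]
    y = y_post[1:]

    # Two differential terms: each pairs one neuron's signal with the other
    # neuron's derivative. With these signs the weight grows when the
    # post-synaptic event tends to follow the pre-synaptic one, and shrinks
    # in the opposite case (an STDP-like asymmetry).
    dw = eta * dt * np.sum(x * dy - dx * y)
    return w + dw

# Example: a post-synaptic pulse following a pre-synaptic pulse by a few steps
# strengthens the synapse; reversing the order weakens it.
t = np.arange(0.0, 1.0, 0.01)
pre = np.exp(-((t - 0.40) / 0.05) ** 2)   # pre-synaptic pulse at t = 0.40
post = np.exp(-((t - 0.45) / 0.05) ** 2)  # post-synaptic pulse at t = 0.45
print(dhl_weight_update(0.5, pre, post, eta=0.1, dt=0.01))  # > 0.5
print(dhl_weight_update(0.5, post, pre, eta=0.1, dt=0.01))  # < 0.5
```

Under these assumptions, swapping the order of the two pulses flips the sign of the weight change: this sensitivity to temporal order is what distinguishes differential Hebbian learning from plain Hebbian co-occurrence.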