
General differential Hebbian learning: Capturing temporal relations between events in neural networks and the brain


Abstract

Learning in biologically relevant neural-network models usually relies on Hebb learning rules. Typical implementations of these rules change the synaptic strength on the basis of the co-occurrence of the neural events taking place at a certain time in the pre- and post-synaptic neurons. Differential Hebbian learning (DHL) rules, instead, are able to update the synapse by taking into account the temporal relation, captured with derivatives, between the neural events happening in the recent past. The few DHL rules proposed so far can update the synaptic weights in only a few ways: this is a limitation for the study of dynamical neurons and neural-network models. Moreover, empirical evidence on brain spike-timing-dependent plasticity (STDP) shows that different neurons express a surprisingly rich repertoire of different learning processes going far beyond existing DHL rules. This opens up a second problem: how to capture such processes with DHL rules. Here we propose a general DHL (G-DHL) rule that generates the existing rules and many others. The rule has high expressiveness as it combines the pre- and post-synaptic neuron signals and their derivatives in different ways. The rule's flexibility is shown by applying it to various signals of artificial neurons and by fitting several different STDP experimental data sets. To these purposes, we propose techniques to pre-process the neural signals and capture the temporal relations between the neural events of interest. We also propose a procedure to automatically identify the rule components and parameters that best fit different STDP data sets, and show how the identified components might be used to heuristically guide the search for the biophysical mechanisms underlying STDP. Overall, the results show that the G-DHL rule represents a useful means to study time-sensitive learning processes in both artificial neural networks and the brain.
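To make the idea concrete, the following is a minimal, hypothetical sketch of how a rule combining pre- and post-synaptic signals and their derivatives might look in code. The function name, the four-term product form, and the coefficient vector `k` are illustrative assumptions, not the authors' actual G-DHL formulation; setting `k = (1, 0, 0, 0)` recovers a classic Hebb-like correlation term, while `k = (0, 0, 0, 1)` gives a classic derivative-based DHL term.

```python
def g_dhl_update(pre, post, dt=1.0, eta=0.01, k=(0.0, 0.0, 0.0, 1.0)):
    """Hypothetical sketch of a generalized differential Hebbian update.

    pre, post: lists of pre-/post-synaptic activity sampled every dt.
    k: weights of the four signal/derivative product terms (an assumed
       parameterization, not the paper's exact rule):
         k[0]: pre * post                  (classic Hebb)
         k[1]: pre * d(post)/dt
         k[2]: d(pre)/dt * post
         k[3]: d(pre)/dt * d(post)/dt     (classic differential Hebb)
    Returns the total weight change accumulated over the signal.
    """
    def deriv(x):
        # Forward-difference time derivative, padded to the same length.
        return [(x[i + 1] - x[i]) / dt for i in range(len(x) - 1)] + [0.0]

    dpre, dpost = deriv(pre), deriv(post)
    dw = 0.0
    for p, q, dp, dq in zip(pre, post, dpre, dpost):
        dw += eta * (k[0] * p * q + k[1] * p * dq
                     + k[2] * dp * q + k[3] * dp * dq) * dt
    return dw
```

The point of the sketch is the abstract's claim about expressiveness: a single parameterized combination of signals and derivatives subsumes both co-occurrence-based Hebb learning and derivative-based DHL as special cases of one rule.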
