
Information Representation and Computation of Spike Trains in Reservoir Computing Systems with Spiking Neurons and Analog Neurons


Abstract

Real-time processing of space-and-time-variant signals is imperative for perception and real-world problem-solving. In the brain, spatio-temporal stimuli are converted into spike trains by sensory neurons and projected to neurons in subcortical and cortical layers for further processing. Reservoir Computing (RC) is a neural computation paradigm inspired by cortical Neural Networks (NN). It is promising for real-time, online computation of spatio-temporal signals. An RC system incorporates a Recurrent Neural Network (RNN) called the reservoir, whose state is changed by a trajectory of perturbations caused by a spatio-temporal input sequence. A trained, non-recurrent, linear readout layer interprets the dynamics of the reservoir over time. The Echo-State Network (ESN) [1] and the Liquid-State Machine (LSM) [2] are two popular, canonical types of RC system. The former uses non-spiking analog sigmoidal neurons (and, more recently, Leaky Integrator (LI) neurons) together with a normalized random connectivity matrix in the reservoir. The reservoir in the latter, by contrast, is composed of Leaky Integrate-and-Fire (LIF) neurons, distributed in a 3-D space and connected with dynamic synapses through a probability function. The major difference between analog neurons and spiking neurons lies in their neuron-model dynamics and their inter-neuron communication mechanism. Nevertheless, RC systems share a mysterious common property: they exhibit the best performance when the reservoir dynamics undergo a criticality [1–6] governed by the reservoirs' connectivity parameters (|λmax| ≈ 1 in ESN; λ ≈ 2 and w in LSM), which is referred to as the edge of chaos in [3–5]. In this study, we explore the possible reasons for this commonality, despite the differences that the different neuron types impose on the reservoir dynamics. We address this question from the perspective of information representation in both spiking and non-spiking reservoirs.
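The ESN construction described above can be sketched minimally as follows. This is an illustrative sketch, not the authors' implementation: the function names and parameter defaults are hypothetical, and the spectral-radius scaling corresponds to the |λmax| ≈ 1 criticality condition mentioned above.

```python
import numpy as np

def make_esn(n=100, spectral_radius=0.95, leak=0.3, seed=0):
    """Build a random reservoir weight matrix, scaled so its largest
    eigenvalue magnitude |lambda_max| equals `spectral_radius`
    (near-critical when close to 1)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n, n))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    W_in = rng.standard_normal((n, 1))  # input projection weights
    return W, W_in, leak

def run_esn(W, W_in, leak, u):
    """Drive the reservoir with a scalar input sequence `u` using the
    leaky-integrator (LI) state update; return all reservoir states."""
    x = np.zeros(W.shape[0])
    states = []
    for u_t in u:
        x = (1 - leak) * x + leak * np.tanh(W @ x + (W_in * u_t).ravel())
        states.append(x.copy())
    return np.array(states)
```

A linear readout would then be trained (e.g., by ridge regression) on the collected states; the reservoir weights themselves stay fixed.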
We measure the Mutual Information (MI) between the state of the reservoir and a spatio-temporal spike-train input, as well as that between the reservoir and a linearly inseparable function of the input, temporal parity. In addition, we derive a Mean Cumulative Mutual Information (MCMI) quantity from MI to measure the amount of stable memory in the reservoir and its correlation with performance on the temporal-parity task. We complement our investigation by conducting isolated spoken-digit recognition and spoken-digit sequence-recognition tasks. We hypothesize that a performance analysis of these two tasks will agree with our MI and MCMI results with regard to the impact of stable memory on task performance. It turns out that, for all reservoir types and in all the tasks conducted, reservoir performance peaks when the amount of stable memory in the reservoir is maximized. Likewise, in the chaotic regime (when the network connectivity parameter exceeds its critical value), the absence of stable memory in the reservoir appears to be an evident cause of the performance decrease in all conducted tasks. Our results also show that the reservoir with LIF neurons possesses a higher stable memory of the input (quantified by the input-reservoir MCMI) and outperforms the reservoirs with analog sigmoidal and LI neurons on the temporal-parity and spoken-digit recognition tasks. From an efficiency standpoint, the reservoir with 100 LIF neurons outperforms the reservoir with 500 LI neurons in spoken-digit recognition; the sigmoidal reservoir falls short of solving this task. The optimum input-reservoir MCMIs we obtained for the reservoirs with LIF, LI, and sigmoidal neurons are 4.21, 3.79, and 3.71, and the corresponding output-reservoir MCMIs are 2.92, 2.51, and 2.47, respectively. In our isolated spoken-digit recognition experiments, the maximum mean performance achieved by the reservoirs with N = 500 LIF, LI, and sigmoidal neurons is 97%, 79%, and 2%, respectively.
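The two key ingredients of this analysis, the temporal-parity target and an empirical MI estimate, can be sketched as below. This is a simplified illustration under assumed definitions (the parity window length `delay` and the histogram-based MI estimator are our choices for the sketch, not necessarily the exact formulations used in the study):

```python
import numpy as np

def temporal_parity(u, delay=3):
    """Target y[t] = parity (XOR) of the last `delay` input bits.
    This function is linearly inseparable in the inputs."""
    u = np.asarray(u, dtype=int)
    y = np.zeros_like(u)
    for t in range(delay - 1, len(u)):
        y[t] = u[t - delay + 1 : t + 1].sum() % 2
    return y

def mutual_information(a, b):
    """Empirical MI in bits between two discrete sequences,
    estimated from their joint histogram."""
    a, b = np.asarray(a), np.asarray(b)
    mi = 0.0
    for av in np.unique(a):
        for bv in np.unique(b):
            p_ab = np.mean((a == av) & (b == bv))
            if p_ab > 0:
                mi += p_ab * np.log2(p_ab / (np.mean(a == av) * np.mean(b == bv)))
    return mi
```

Applied to a (discretized) reservoir state and the input or parity target, such an estimator yields the MI quantities from which a cumulative measure like MCMI can be aggregated over time lags.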
The reservoirs with N = 100 neurons solve the task with 80%, 68%, and 0.9%, respectively. Our study sheds light on the impact of the information representation and memory of the reservoir on the performance of RC systems. The results of our experiments reveal the advantage of using LIF neurons in RC systems that compute on spike trains to solve memory-demanding, real-world, spatio-temporal problems. Our findings have applications in engineering nano-electronic RC systems that can be used to solve real-world spatio-temporal problems.

Bibliographic record

  • Author: Almassian, Amin
  • Year: 2016
  • Format: PDF
