
An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks

Abstract

Spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike-emitting and information-processing mechanisms found in biological cognitive systems motivate the use of a hierarchical structure and temporal encoding in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and the temporal encoding approach require neurons to process information serially in space and in time respectively, which significantly reduces training efficiency. Most existing methods for training hierarchical SNNs are based on the traditional back-propagation algorithm and inherit its drawbacks of gradient diffusion and sensitivity to parameters. To retain the powerful computational capability of the hierarchical structure and the temporal encoding mechanism while overcoming the low efficiency of existing algorithms, a new training algorithm, Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, output spike times are obtained by solving a quadratic function in the spike response model, instead of checking the postsynaptic voltage at every time point as in traditional algorithms. In the feedback weight modification, the computational error is propagated to previous layers through presynaptic spike jitter rather than the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the change in voltage error, which makes normalization of the weight modification applicable. With these strategies, our algorithm outperforms traditional multilayer SNN algorithms in terms of learning efficiency and parameter sensitivity, as demonstrated by the comprehensive experimental results in this paper.
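The feedforward step described in the abstract can be illustrated with a small sketch. The Python snippet below is an illustrative assumption, not the paper's exact formulation: it assumes each weighted postsynaptic potential follows a quadratic kernel eps(s) = s(2*TAU - s)/TAU^2, so that between presynaptic events the membrane voltage is a quadratic in t, and the output spike time is found by solving V(t) = THETA with the quadratic formula rather than scanning every time point. The kernel shape, threshold THETA, time constant TAU, spike times, and weights are all assumed values chosen for the example.

# Illustrative sketch (not the paper's exact formulation) of the idea behind
# NSEBP's feedforward step: between presynaptic events the membrane voltage
# is a quadratic in t, so the output spike time can be found by solving
# V(t) = THETA analytically instead of scanning every time point.
# The kernel eps(s) = s * (2*TAU - s) / TAU**2 and all constants below are
# assumptions chosen for this example.

import math

TAU = 5.0     # assumed kernel time constant (ms)
THETA = 1.0   # assumed firing threshold


def psp_coeffs(t_pre, w):
    """Coefficients (a, b, c) of one weighted PSP, w * eps(t - t_pre),
    expanded as a*t**2 + b*t + c for t >= t_pre."""
    k = w / TAU ** 2
    a = -k
    b = k * (2.0 * t_pre + 2.0 * TAU)
    c = -k * (t_pre ** 2 + 2.0 * TAU * t_pre)
    return a, b, c


def first_spike_time(pre_spikes, weights):
    """Earliest time at which the summed quadratic PSPs reach THETA,
    or None if the threshold is never crossed.  Simplification: assumes
    every active kernel stays inside its 2*TAU window."""
    events = sorted(zip(pre_spikes, weights))
    a = b = c = 0.0
    for i, (t_pre, w) in enumerate(events):
        da, db, dc = psp_coeffs(t_pre, w)
        a, b, c = a + da, b + db, c + dc
        # interval in which exactly this set of inputs is active
        t_lo = t_pre
        t_hi = events[i + 1][0] if i + 1 < len(events) else t_pre + 2.0 * TAU
        # solve a*t**2 + b*t + (c - THETA) = 0 inside [t_lo, t_hi]
        disc = b * b - 4.0 * a * (c - THETA)
        if a == 0.0 or disc < 0.0:
            continue
        root = math.sqrt(disc)
        for t in sorted(((-b + root) / (2.0 * a), (-b - root) / (2.0 * a))):
            if t_lo <= t <= t_hi:
                return t
    return None


if __name__ == "__main__":
    # three presynaptic spikes (ms) with assumed weights
    print(first_spike_time([1.0, 2.0, 3.0], [0.6, 0.5, 0.4]))  # ~4.10 ms

Because the threshold-crossing time follows directly from the quadratic formula within each inter-event interval, no per-time-step voltage check is needed, which is the source of the efficiency gain the abstract refers to.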