Frontiers in Neuroanatomy

Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware



Abstract

SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10⁴ neurons and 5.1 × 10⁷ plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.
