
Parallel simulation of neural networks on SpiNNaker universal neuromorphic hardware



Abstract

Artificial neural networks have shown great potential and have attracted much research interest. One problem faced when simulating such networks is speed. As the number of neurons increases, the time to simulate and train a network increases dramatically. This makes it difficult to simulate and train a large-scale network system without the support of a high-performance computer system. The solution we present is a "real" parallel system - using a parallel machine to simulate neural networks which are intrinsically parallel applications. SpiNNaker is a scalable massively-parallel computing system under development with the aim of building a general-purpose platform for the parallel simulation of large-scale neural systems. This research investigates how to model large-scale neural networks efficiently on such a parallel machine. While providing increased overall computational power, a parallel architecture introduces a new problem - the increased communication reduces the speedup gains. Modeling schemes, which take into account communication, processing, and storage requirements, are investigated to solve this problem. Since modeling schemes are application-dependent, two different types of neural network are examined - spiking neural networks with spike-time dependent plasticity, and the parallel distributed processing model with the backpropagation learning rule. Different modeling schemes are developed and evaluated for the two types of neural network. The research shows the feasibility of the approach as well as the performance of SpiNNaker as a general-purpose platform for the simulation of neural networks. The linear scalability shown in this architecture provides a path to the further development of parallel solutions for the simulation of extremely large-scale neural networks.
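The abstract refers to two learning rules: spike-time dependent plasticity (STDP) for the spiking networks and backpropagation for the parallel distributed processing model. As a purely illustrative sketch, not the specific scheme described in the thesis, the pair-based STDP weight update commonly used in spiking-network simulators can be written as follows; the amplitudes and time constants are assumed values chosen only for illustration.

import math

# Illustrative sketch of a standard pair-based STDP rule; parameter
# names and values (A_PLUS, A_MINUS, TAU_PLUS, TAU_MINUS) are
# assumptions, not taken from the thesis.
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants in ms

def stdp_delta_w(t_pre: float, t_post: float) -> float:
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre-synaptic spike before post-synaptic spike: potentiate
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:  # post-synaptic spike before pre-synaptic spike: depress
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

# Example: a pre-synaptic spike 5 ms before a post-synaptic spike
# yields a positive (potentiating) weight change.
print(stdp_delta_w(t_pre=10.0, t_post=15.0))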

Bibliographic details

  • Authors

    Furber Stephen; Jin Xin;

  • Author affiliation
  • Year: 2010
  • Total pages
  • Original format: PDF
  • Language: English
  • CLC classification
