Neurocomputing

Benchmarking the performance of neuromorphic and spiking neural network simulators


Abstract

Software simulators play a critical role in the development of new algorithms and system architectures in any field of engineering. Neuromorphic computing, which has shown potential in building brain-inspired energy-efficient hardware, suffers a slow-down in the development cycle due to a lack of flexible and easy-to-use simulators of either neuromorphic hardware itself or of spiking neural networks (SNNs), the type of neural network computation executed on most neuromorphic systems. While there are several openly available neuromorphic or SNN simulation packages developed by a variety of research groups, they have mostly targeted computational neuroscience simulations, and only a few have targeted small-scale machine learning tasks with SNNs. Evaluations or comparisons of these simulators have often targeted computational neuroscience-style workloads. In this work, we seek to evaluate the performance of several publicly available SNN simulators with respect to non-computational neuroscience workloads, in terms of speed, flexibility, and scalability. We evaluate the performance of the NEST, Brian2, Brian2GeNN, BindsNET and Nengo packages under a common front-end neuromorphic framework. Our evaluation tasks include a variety of different network architectures and workload types to mimic the computation common in different algorithms, including feed-forward network inference, genetic algorithms, and reservoir computing. We also study the scalability of each of these simulators when running on different computing hardware, from single-core CPU workstations to multi-node supercomputers. Our results show that the BindsNET simulator has the best speed and scalability for most of the SNN workloads (sparse, dense, and layered SNN architectures) on a single-core CPU. However, when comparing the simulators leveraging GPU capabilities, Brian2GeNN outperforms the others for these workloads in terms of scalability. NEST performs the best for small sparse networks and is also the most flexible simulator in terms of reconfiguration capability. NEST shows a speedup of at least 2x compared to the other packages when running evolutionary algorithms for SNNs. The multi-node and multi-thread capabilities of NEST show at least a 2x speedup compared to the rest of the simulators (single-core CPU or GPU-based simulators) for large and sparse networks. We conclude our work by providing a set of recommendations on the suitability of employing these simulators for different tasks and scales of operation. We also present the characteristics of a future generic ideal SNN simulator for different neuromorphic computing workloads. (c) 2021 Elsevier B.V. All rights reserved.
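For readers unfamiliar with these packages, the sketch below illustrates the kind of layered feed-forward SNN workload such benchmarks exercise, written with Brian2 (one of the evaluated simulators). It is an illustration only, not code from the paper: the layer sizes, input rate, synaptic weight, and membrane time constant are arbitrary assumptions.

    # Minimal sketch: a Poisson-driven, two-layer feed-forward SNN in Brian2.
    # Illustrative only; sizes, rates, weights, and time constants are arbitrary.
    from brian2 import (NeuronGroup, PoissonGroup, Synapses, SpikeMonitor,
                        run, ms, Hz)

    n_in, n_hidden = 100, 400

    # Poisson spike sources stand in for an encoded input stimulus.
    inputs = PoissonGroup(n_in, rates=20*Hz)

    # Leaky integrate-and-fire neurons with a 10 ms membrane time constant.
    hidden = NeuronGroup(n_hidden, 'dv/dt = -v / (10*ms) : 1',
                         threshold='v > 1', reset='v = 0', method='exact')

    # Dense input-to-hidden projection; each presynaptic spike adds a fixed weight.
    syn = Synapses(inputs, hidden, on_pre='v += 0.1')
    syn.connect()  # all-to-all connectivity

    spikes = SpikeMonitor(hidden)
    run(100*ms)  # simulate 100 ms of biological time
    print(f'{spikes.num_spikes} spikes recorded in the hidden layer')

The benchmarks in the paper express comparable networks through a common front end so that the same workload can be run on NEST, Brian2, Brian2GeNN, BindsNET, and Nengo.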