Frontiers in Neuroinformatics

Communication Sparsity in Distributed Spiking Neural Network Simulations to Improve Scalability



Abstract

In the last decade there has been a surge in the number of big science projects interested in achieving a comprehensive understanding of the functions of the brain, using Spiking Neuronal Network (SNN) simulations to aid discovery and experimentation. Such an approach increases the computational demands on SNN simulators: if natural scale brain-size simulations are to be realized, it is necessary to use parallel and distributed models of computing. Communication is recognized as the dominant part of distributed SNN simulations. As the number of computational nodes increases, the proportion of time the simulation spends in useful computing (computational efficiency) is reduced and therefore applies a limit to scalability. This work targets the three phases of communication to improve overall computational efficiency in distributed simulations: implicit synchronization, process handshake and data exchange. We introduce a connectivity-aware allocation of neurons to compute nodes by modeling the SNN as a hypergraph. Partitioning the hypergraph to reduce interprocess communication increases the sparsity of the communication graph. We propose dynamic sparse exchange as an improvement over simple point-to-point exchange on sparse communications. Results show a combined gain when using hypergraph-based allocation and dynamic sparse communication, increasing computational efficiency by up to 40.8 percentage points and reducing simulation time by up to 73%. The findings are applicable to other distributed complex system simulations in which communication is modeled as a graph network.
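The connectivity-aware allocation described in the abstract can be illustrated with a small sketch. This is not the paper's implementation (a production system would use a dedicated hypergraph partitioner such as Zoltan or KaHyPar); it only shows the modeling idea: each presynaptic neuron together with its postsynaptic targets forms a hyperedge, and an allocation of neurons to compute nodes is scored by how many parts each hyperedge spans, since a hyperedge spanning λ parts forces λ − 1 remote messages per spike. The locally connected ring network below is a hypothetical example topology.

```python
import random

def make_local_snn(n, out_degree, reach, seed=1):
    """Each neuron projects to `out_degree` nearby neurons on a ring,
    mimicking the local connectivity common in cortical SNN models.
    Returns one hyperedge per neuron: {source} union {its targets}."""
    rng = random.Random(seed)
    edges = []
    for src in range(n):
        targets = {(src + rng.randint(1, reach)) % n for _ in range(out_degree)}
        edges.append({src} | targets)
    return edges

def comm_cost(hyperedges, part):
    """(lambda - 1) connectivity metric used by hypergraph partitioners:
    a hyperedge whose neurons land on lambda distinct parts requires
    lambda - 1 interprocess messages when the source neuron spikes."""
    return sum(len({part[v] for v in e}) - 1 for e in hyperedges)

n, k = 128, 4
edges = make_local_snn(n, out_degree=8, reach=16)

# Connectivity-blind allocation: neurons scattered round-robin over parts.
round_robin = [v % k for v in range(n)]
# Connectivity-aware allocation: contiguous blocks keep neighbours together,
# so only hyperedges near block boundaries cross parts.
blocks = [v * k // n for v in range(n)]

print("round-robin cost:", comm_cost(edges, round_robin))
print("block cost:     ", comm_cost(edges, blocks))
```

With local connectivity the block allocation cuts far fewer hyperedges than round-robin, which is the effect the paper exploits: a sparser communication graph means fewer processes to handshake with and less data to exchange per timestep.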
