Journal of VLSI signal processing systems for signal, image, and video technology

Run-time Mapping of Spiking Neural Networks to Neuromorphic Hardware

Abstract

Neuromorphic architectures implement biological neurons and synapses to execute machine learning algorithms with spiking neurons and bio-inspired learning rules. These architectures are energy efficient and therefore suitable for cognitive information processing in resource- and power-constrained environments, such as those in which the sensor and edge nodes of the internet of things (IoT) operate. To map a spiking neural network (SNN) to a neuromorphic architecture, prior works have proposed design-time solutions, where the SNN is first analyzed offline using representative data and then mapped to the hardware to optimize objective functions such as minimizing spike communication or maximizing resource utilization. In many emerging applications, machine learning models may change based on the input, following online learning rules: new connections may form or existing connections may disappear at run-time depending on input excitation. Therefore, an already mapped SNN may need to be re-mapped to the neuromorphic hardware to retain optimal performance. Unfortunately, due to their high computation time, design-time approaches are not suitable for re-mapping a machine learning model at run-time after every learning epoch. In this paper, we propose a design methodology to partition and map the neurons and synapses of online-learning SNN-based applications to neuromorphic architectures at run-time. Our methodology operates in two steps: step 1 is a layer-wise greedy approach that partitions the SNN into clusters of neurons and synapses while respecting the constraints of the neuromorphic architecture, and step 2 is a hill-climbing optimization algorithm that minimizes the total number of spikes communicated between clusters, reducing energy consumption on the shared interconnect of the architecture. We conduct experiments with synthetic and realistic SNN-based applications to evaluate the feasibility of our algorithm. We demonstrate that our algorithm reduces SNN mapping time by an average of 780x compared to a state-of-the-art design-time SNN partitioning approach, with only 6.25% lower solution quality.
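
The abstract outlines a two-step run-time mapping flow: greedy layer-wise partitioning followed by hill-climbing that minimizes inter-cluster spike traffic. The Python sketch below illustrates that flow under simplifying assumptions; it is not the authors' implementation. The cluster capacity (`neurons_per_cluster`), the per-synapse `spike_counts` cost model, and the pairwise-swap move are illustrative stand-ins for the hardware constraints and cost function described in the paper.

```python
import random

def greedy_partition(layers, neurons_per_cluster):
    """Step 1 (sketch): layer-wise greedy partitioning.
    Traverse the SNN layer by layer and pack neurons into clusters,
    respecting an assumed per-cluster neuron capacity of the hardware.
    (Here clusters may span layer boundaries; this is a simplification.)"""
    clusters, current = [], []
    for layer in layers:              # layers: list of lists of neuron ids
        for neuron in layer:
            current.append(neuron)
            if len(current) == neurons_per_cluster:
                clusters.append(current)
                current = []
    if current:
        clusters.append(current)
    return clusters

def inter_cluster_spikes(clusters, spike_counts):
    """Assumed cost model: total spikes on synapses whose source and
    destination neurons sit in different clusters, i.e. traffic that
    must cross the shared interconnect."""
    owner = {n: i for i, cluster in enumerate(clusters) for n in cluster}
    return sum(count for (src, dst), count in spike_counts.items()
               if owner[src] != owner[dst])

def hill_climb(clusters, spike_counts, iterations=1000):
    """Step 2 (sketch): hill-climbing refinement.
    Repeatedly propose swapping one neuron between two random clusters
    and keep the swap only if it lowers inter-cluster spike traffic."""
    best = inter_cluster_spikes(clusters, spike_counts)
    for _ in range(iterations):
        a, b = random.sample(range(len(clusters)), 2)
        i = random.randrange(len(clusters[a]))
        j = random.randrange(len(clusters[b]))
        clusters[a][i], clusters[b][j] = clusters[b][j], clusters[a][i]
        cost = inter_cluster_spikes(clusters, spike_counts)
        if cost < best:
            best = cost               # keep the improving swap
        else:                         # otherwise undo it
            clusters[a][i], clusters[b][j] = clusters[b][j], clusters[a][i]
    return clusters, best

# Hypothetical toy usage: a 3-layer SNN with 8 neurons and per-synapse
# spike counts, mapped onto clusters holding at most 3 neurons each.
layers = [[0, 1, 2], [3, 4], [5, 6, 7]]
spike_counts = {(0, 3): 40, (1, 3): 5, (2, 4): 12,
                (3, 5): 30, (3, 6): 3, (4, 7): 9}
clusters = greedy_partition(layers, neurons_per_cluster=3)
clusters, cost = hill_climb(clusters, spike_counts, iterations=200)
print(clusters, cost)
```

Because each move is a cheap local swap that is accepted only when it strictly reduces the cost, the refinement loop stays inexpensive, which is consistent with the paper's goal of re-mapping after every learning epoch rather than running a full design-time partitioning pass.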
