IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

FSpiNN: An Optimization Framework for Memory-Efficient and Energy-Efficient Spiking Neural Networks

Abstract

Spiking neural networks (SNNs) are gaining interest due to their event-driven processing, which potentially enables low-power/low-energy computation on hardware platforms, while offering unsupervised learning capability through the spike-timing-dependent plasticity (STDP) rule. However, state-of-the-art SNNs require a large memory footprint to achieve high accuracy, which makes them difficult to deploy on embedded systems such as battery-powered mobile devices and IoT edge nodes. Toward this, we propose FSpiNN, an optimization framework for obtaining memory-efficient and energy-efficient SNNs for training and inference, with unsupervised learning capability while maintaining accuracy. This is achieved by: 1) reducing the computational requirements of neuronal and STDP operations; 2) improving the accuracy of STDP-based learning; 3) compressing the SNN through fixed-point quantization; and 4) incorporating the memory and energy requirements into the optimization process. FSpiNN reduces the computational requirements by reducing the number of neuronal operations, the STDP-based synaptic weight updates, and the STDP complexity. To improve the accuracy of learning, FSpiNN employs timestep-based synaptic weight updates and adaptively determines the STDP potentiation factor and the effective inhibition strength. The experimental results show that, compared to the state-of-the-art work, FSpiNN achieves a 7.5x memory saving and improves energy efficiency by 3.5x on average for training and by 1.8x on average for inference, across the MNIST and Fashion-MNIST datasets, with no accuracy loss for a network with 4900 excitatory neurons, thereby enabling energy-efficient SNNs for edge devices and embedded systems.
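The framework's details are in the full paper; purely as a rough illustration of two ideas the abstract names (once-per-timestep, trace-based STDP weight updates and fixed-point compression of the synaptic weights), the Python sketch below applies a simplified pair-based STDP potentiation once per timestep and stores weights on a fixed-point grid. The Q1.7 format, learning rate, trace time constant, and function names are illustrative assumptions, not FSpiNN's published rule or parameters.

    import numpy as np

    def quantize_fixed_point(w, int_bits=1, frac_bits=7):
        """Round weights to a Qm.n fixed-point grid and clip to its range.
        (Q1.7 is an illustrative choice; FSpiNN selects the precision that
        meets the memory/energy budget without accuracy loss.)"""
        scale = 2.0 ** frac_bits
        w_max = 2.0 ** int_bits - 1.0 / scale
        w_min = -(2.0 ** int_bits)
        return np.clip(np.round(w * scale) / scale, w_min, w_max)

    def stdp_timestep_update(w, x_pre, pre_spikes, post_spikes,
                             lr=0.01, tau_pre=20.0):
        """One simplified STDP update per timestep using a presynaptic
        trace (a sketch of trace-based STDP, not FSpiNN's exact rule).
        The trace decays each step and jumps to 1 on a presynaptic spike;
        potentiation is applied only where a postsynaptic neuron fired."""
        x_pre = x_pre * np.exp(-1.0 / tau_pre)
        x_pre[pre_spikes] = 1.0
        # Potentiate synapses onto neurons that spiked this timestep,
        # proportionally to how recently each presynaptic input fired.
        dw = lr * np.outer(x_pre, post_spikes.astype(float))
        return quantize_fixed_point(w + dw), x_pre

    # Toy usage: 4 presynaptic inputs, 2 excitatory neurons, one timestep.
    rng = np.random.default_rng(0)
    w = quantize_fixed_point(rng.uniform(0.0, 1.0, size=(4, 2)))
    x_pre = np.zeros(4)
    pre = rng.random(4) < 0.3    # presynaptic spikes this timestep
    post = rng.random(2) < 0.5   # postsynaptic spikes this timestep
    w, x_pre = stdp_timestep_update(w, x_pre, pre, post)

Batching the weight update per timestep (rather than per spike) is what lets the number of STDP operations scale with simulation steps instead of spike count; quantizing in the same pass keeps the stored weights at the reduced precision throughout training.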