IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

STDP-Based Pruning of Connections and Weight Quantization in Spiking Neural Networks for Energy-Efficient Recognition


Abstract

Spiking neural networks (SNNs) with a large number of weights and a varied weight distribution can be difficult to implement in emerging in-memory computing hardware because of the limits on crossbar size (which implements the dot product), the constrained number of conductance states in non-CMOS devices, and the power budget. We present a sparse SNN topology in which noncritical connections are pruned to reduce the network size, and the remaining critical synapses are weight-quantized to accommodate the limited conductance states. Pruning is based on the power-law weight-dependent spike-timing-dependent plasticity (STDP) model: synapses between pre- and post-neurons with high spike correlation are retained, whereas synapses with low or uncorrelated spiking activity are pruned. The weights of the retained connections are quantized to the available number of conductance states. Pruning of noncritical connections and quantization of the critical synaptic weights are performed at regular intervals during training. We evaluated the sparse, quantized network on the MNIST dataset and on a subset of images from the Caltech-101 dataset. The compressed topology achieved a classification accuracy of 90.1% (91.6%) on MNIST (Caltech-101) with 3.1X (2.2X) and 4X (2.6X) improvements in energy and area, respectively. The compressed topology is energy- and area-efficient while maintaining the same classification accuracy as a two-layer fully connected SNN topology.
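The prune-and-quantize step described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the `correlation` matrix below is a random stand-in for the STDP-derived pre/post spike correlation, and the function names and thresholds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def prune_and_quantize(weights, correlation, prune_fraction=0.2, n_states=16):
    """Prune low-correlation synapses, then quantize the survivors.

    weights:      pre x post synaptic weight matrix
    correlation:  per-synapse spike correlation (same shape); in the paper this
                  comes from power-law weight-dependent STDP, here it is random
    n_states:     number of conductance levels the device can realize
    """
    # Prune: drop the synapses with the lowest pre/post spike correlation.
    threshold = np.quantile(correlation, prune_fraction)
    mask = correlation >= threshold

    # Quantize: map surviving weights onto the available conductance levels.
    pruned = weights * mask
    w_max = pruned.max() if pruned.max() > 0 else 1.0
    levels = np.round(pruned / w_max * (n_states - 1))
    quantized = levels / (n_states - 1) * w_max
    return quantized * mask, mask

weights = rng.random((784, 100))       # e.g. MNIST pixels -> excitatory layer
correlation = rng.random((784, 100))   # stand-in for STDP spike correlation
w_q, mask = prune_and_quantize(weights, correlation)
```

In training, a step like this would be invoked at regular intervals, so that synapses whose correlation stays low are progressively removed while the rest remain representable with the device's limited conductance states.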


