
An FPGA Implementation of Deep Spiking Neural Networks for Low-Power and Fast Classification


Abstract

A spiking neural network (SNN) is a biologically plausible model that performs information processing based on spikes. Training a deep SNN effectively is challenging due to the nondifferentiability of spike signals. Recent advances have shown that high-performance SNNs can be obtained by converting convolutional neural networks (CNNs). However, large-scale SNNs are poorly served by conventional architectures due to the dynamic nature of spiking neurons. In this letter, we propose a hardware architecture to enable efficient implementation of SNNs. All layers in the network are mapped onto one chip so that the computation of different time steps can be done in parallel to reduce latency. We propose a new spiking max-pooling method to reduce computational complexity. In addition, we apply approaches based on shift registers and coarse-grained parallelism to accelerate the convolution operation. We also investigate the effect of different encoding methods on SNN accuracy. Finally, we validate the hardware architecture on the Xilinx Zynq ZCU102. Experimental results on the MNIST data set show that it achieves an accuracy of 98.94% with eight-bit quantized weights. Furthermore, it achieves 164 frames per second (FPS) at a 150 MHz clock frequency, obtaining a 41× speed-up over a CPU implementation and 22 times lower power than a GPU implementation.
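For orientation, the sketch below illustrates in Python/NumPy two of the ideas the abstract refers to: the integrate-and-fire dynamics that underlie CNN-to-SNN conversion, and a spike-based max-pooling scheme that gates the spike of the most active unit in each window. This is a minimal software model under stated assumptions (reset-by-subtraction neurons, count-based pooling gate); the function names and details are illustrative and do not reproduce the paper's hardware design.

```python
import numpy as np

def if_neuron_step(v, input_current, v_thresh=1.0):
    """One time step of an integrate-and-fire layer.
    Assumes reset-by-subtraction, a common choice in CNN-to-SNN conversion."""
    v = v + input_current                     # integrate weighted input spikes
    spikes = (v >= v_thresh).astype(np.float32)
    v = v - spikes * v_thresh                 # subtract threshold where a spike fired
    return v, spikes

def spiking_max_pool(spike_counts, spikes, pool=2):
    """Illustrative spike-based max pooling: in each pooling window, pass through
    the current spike of the unit with the largest running spike count."""
    h, w = spikes.shape
    out = np.zeros((h // pool, w // pool), dtype=np.float32)
    for i in range(0, h, pool):
        for j in range(0, w, pool):
            window = spike_counts[i:i + pool, j:j + pool]
            wi, wj = np.unravel_index(np.argmax(window), window.shape)
            out[i // pool, j // pool] = spikes[i + wi, j + wj]
    return out

# Example: run a few time steps on random input currents
v = np.zeros((4, 4), dtype=np.float32)
counts = np.zeros((4, 4), dtype=np.float32)
for _ in range(10):
    v, s = if_neuron_step(v, 0.5 * np.random.rand(4, 4).astype(np.float32))
    counts += s
    pooled = spiking_max_pool(counts, s)
```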

Bibliographic record

  • Source
    Neural Computation | 2020, Issue 1 | pp. 182-204 | 23 pages
  • Author affiliations

    College of Computer Science, Sichuan University, Chengdu 610065, China;

    School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China;

    College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China, and College of Computer Science, Sichuan University, Chengdu 610065, China;

  • Indexed in: Science Citation Index (SCI); Chemical Abstracts (CA)
  • Format: PDF
  • Language: English
  • Chinese Library Classification (CLC)
  • Keywords
