IEEE International Symposium on Circuits and Systems

A Novel Conversion Method for Spiking Neural Network using Median Quantization



Abstract

Artificial Neural Networks (ANNs) have achieved great success in the fields of computer vision and language understanding. However, it is difficult to deploy these deep learning models on mobile devices because of their massive energy consumption and memory footprint. Alternatively, spiking neural networks (SNNs), which are strongly inspired by the biological brain, are often referred to as the third generation of neural networks for their potential superiority in cognitive learning and energy efficiency. Nevertheless, training a deep SNN remains a major challenge. In this paper, we propose a quantized training algorithm for ANNs that minimizes the spike approximation error, and provide two (temporal and spatial) rate-based conversion methods for SNNs, both of which can be easily mapped onto specific neuromorphic platforms. Moreover, this method generalizes to various network architectures and adapts to dynamic quantization demands. Experimental results on the MNIST and CIFAR-10 datasets demonstrate that the proposed deep spiking neural networks yield state-of-the-art classification accuracy and require far fewer operations than their ANN counterparts. Our source code will be made available upon request for academic purposes.
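The following is a minimal sketch, not the authors' implementation, of the general idea behind rate-based ANN-to-SNN conversion that the abstract describes: an activation quantized to a multiple of 1/T can be represented exactly by a spike count over T timesteps, so quantizing activations during ANN training bounds the spike approximation error. The uniform quantizer and function names below are illustrative assumptions; the paper's median quantization scheme is not reproduced here.

```python
import numpy as np

def quantize_activation(a, T):
    """Uniform quantizer, used here only for illustration: clip to [0, 1]
    and round to the nearest multiple of 1/T (T = simulation timesteps).
    The paper's median quantization is not reproduced here."""
    return np.clip(np.round(a * T), 0, T) / T

def temporal_rate_code(a_q, T, rng=None):
    """Temporal rate coding: emit a 0/1 spike train of length T whose
    mean firing rate equals the quantized activation a_q (k spikes for a_q = k/T)."""
    rng = np.random.default_rng() if rng is None else rng
    k = int(round(a_q * T))
    train = np.zeros(T, dtype=np.int8)
    train[rng.choice(T, size=k, replace=False)] = 1   # place k spikes at random timesteps
    return train

# Example: a quantized activation is recovered exactly from its spike rate.
T = 16
a = 0.37
a_q = quantize_activation(a, T)          # 6/16 = 0.375
spikes = temporal_rate_code(a_q, T)
assert spikes.mean() == a_q              # rate matches the quantized value exactly
print(a, a_q, spikes.mean())
```

A spatial variant of the same idea would distribute the k spikes over a population of T neurons within a single timestep rather than over T timesteps; in either case the residual error is bounded by the quantization step 1/T, which is the quantity the quantized ANN training stage is meant to control.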
