ACM/IEEE Design Automation Conference

T2FSNN: Deep Spiking Neural Networks with Time-to-first-spike Coding

Abstract

Spiking neural networks (SNNs) have gained considerable interest due to their energy-efficient characteristics, yet the lack of a scalable training algorithm has restricted their applicability to practical machine learning problems. The deep neural network-to-SNN conversion approach has been widely studied to broaden the applicability of SNNs. Most previous studies, however, have not fully utilized the spatio-temporal aspects of SNNs, which has led to inefficiency in terms of the number of spikes and inference latency. In this paper, we present T2FSNN, which introduces the concept of time-to-first-spike coding into deep SNNs using a kernel-based dynamic threshold and dendrite to overcome the aforementioned drawback. In addition, we propose gradient-based optimization and early firing methods to further increase the efficiency of T2FSNN. According to our results, the proposed methods reduce inference latency and the number of spikes to 22% and less than 1%, respectively, of those of burst coding, which is the state-of-the-art result on CIFAR-100.
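To make the coding scheme concrete, the following is a minimal sketch (Python/NumPy) of plain time-to-first-spike coding: activation values are mapped to first-spike times (larger values fire earlier), and a single integrate-and-fire layer with an exponentially decaying threshold lets strongly driven neurons fire first. The function names (ttfs_encode, ttfs_layer) and parameters (t_max, tau, theta0), as well as the decay schedule, are illustrative assumptions; the paper's kernel-based dynamic threshold, dendrite model, gradient-based optimization, and early firing methods are not reproduced here.

```python
import numpy as np

def ttfs_encode(activations, t_max=100):
    """Map normalized activations in [0, 1] to first-spike times:
    larger activations fire earlier; zero activations never fire (-1)."""
    activations = np.clip(activations, 0.0, 1.0)
    times = np.round((1.0 - activations) * (t_max - 1)).astype(int)
    return np.where(activations > 0, times, -1)

def ttfs_layer(spike_times, weights, t_max=100, tau=20.0, theta0=1.0):
    """Integrate-and-fire layer with an exponentially decaying threshold
    (a simplified stand-in for a kernel-based dynamic threshold).
    Each output neuron fires at most once; its first-spike time is returned."""
    n_out = weights.shape[1]
    out_times = np.full(n_out, -1, dtype=int)
    potential = np.zeros(n_out)
    for t in range(t_max):
        # Input spikes arriving at time t add their weights to the potential.
        arrived = (spike_times == t).astype(float)
        potential += arrived @ weights
        threshold = theta0 * np.exp(-t / tau)  # hypothetical decay schedule
        newly_fired = (potential >= threshold) & (out_times == -1)
        out_times[newly_fired] = t
    return out_times

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.random(8)                      # toy "activations"
    w = rng.normal(0.0, 0.5, size=(8, 4))  # toy weights
    in_spikes = ttfs_encode(x)
    print("input spike times :", in_spikes)
    print("output spike times:", ttfs_layer(in_spikes, w))
```

Running the script prints input and output first-spike times for a toy layer; output neurons whose potential never crosses the decaying threshold keep the value -1, i.e., they emit no spike.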
