IEEE International Conference on Rebooting Computing

Conversion of artificial recurrent neural networks to spiking neural networks for low-power neuromorphic hardware



Abstract

In recent years the field of neuromorphic low-power systems has gained significant momentum, spurring brain-inspired hardware systems that operate on principles fundamentally different from those of standard digital computers and thereby consume orders of magnitude less power. However, their wider use is still hindered by the lack of algorithms that can harness the strengths of such architectures. While neuromorphic adaptations of representation learning algorithms are now emerging, the efficient processing of temporal sequences or variable-length inputs remains difficult, partly due to challenges in representing and configuring the dynamics of spiking neural networks. Recurrent neural networks (RNNs) are widely used in machine learning to solve a variety of sequence learning tasks. In this work we present a "train-and-constrain" methodology that enables the mapping of machine-learned (Elman) RNNs onto a substrate of spiking neurons, while remaining compatible with the capabilities of current and near-future neuromorphic systems. The method consists of first training RNNs using backpropagation through time, then discretizing the weights, and finally converting them to spiking RNNs by matching the responses of the artificial neurons with those of the spiking neurons. We demonstrate our approach on a natural language processing task (question classification), mapping the entire recurrent layer of the network onto IBM's Neurosynaptic System TrueNorth, a spike-based digital neuromorphic hardware architecture. TrueNorth imposes specific constraints on connectivity and on neural and synaptic parameters. To satisfy these constraints, it was necessary to discretize the synaptic weights to 16 levels, discretize the neural activities to 16 levels, and limit the fan-in to 64 inputs. Surprisingly, we find that short synaptic delays are sufficient to implement the dynamic (temporal) aspect of the RNN in the question classification task. Furthermore, we observed that the discretization of the neural activities is beneficial to our train-and-constrain approach. The hardware-constrained model achieved 74% accuracy on question classification while using less than 0.025% of the cores on one TrueNorth chip, resulting in an estimated power consumption of ≈ 17 μW.
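To make the discretization step of the train-and-constrain pipeline concrete, the sketch below uniformly quantizes a trained weight matrix to 16 levels, as TrueNorth's synaptic-weight constraint requires. This is a minimal illustration only: the function name and the uniform quantization scheme are assumptions, not the authors' exact procedure.

```python
import numpy as np

def discretize(values, num_levels=16):
    """Uniformly discretize an array to `num_levels` evenly spaced levels
    spanning its own min-max range.

    Illustrative stand-in for the weight/activity discretization described
    in the abstract; the paper's actual scheme may differ.
    """
    lo, hi = values.min(), values.max()
    step = (hi - lo) / (num_levels - 1)
    # Snap each value to the nearest of the 16 grid points.
    return lo + np.round((values - lo) / step) * step

# Example: quantize a small Elman-style recurrent weight matrix.
rng = np.random.default_rng(0)
w = rng.standard_normal((8, 8))
w_q = discretize(w, num_levels=16)
```

The same routine could be applied to the neural activities; the abstract notes that this activity discretization even helps the converted network rather than hurting it.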
