Neurocomputing

A digital hardware implementation of spiking neural networks with binary FORCE training


Abstract

The brain, a network of spiking neurons, can learn complex dynamics by adapting its spontaneous chaotic activity. One of the dominant approaches to training such a network, the FORCE method, has recently been applied to spiking neural networks. This method employs a pool of randomly connected spiking neurons, called a reservoir, to create chaos, and uses the recursive least squares (RLS) method to shape its dynamics into those required to follow a teacher signal. Here, we propose a digital hardware architecture for spiking FORCE with some modifications to the original method. First, to reduce memory usage in the hardware implementation, we show that careful binarization of the reservoir weights can preserve its initial chaotic activity. Second, we generate the connection matrix on the fly instead of storing the whole matrix. Third, a single-processor systolic-array implementation of RLS based on the inverse QR decomposition is used to update the readout layer weights. This implementation is not only more hardware-friendly but also more numerically stable in reduced precision than the standard RLS implementation. Fourth, we implement the design in both single-precision and custom-precision floating-point number systems. Finally, we implement a network of 510 Izhikevich neurons on a Xilinx Artix-7 FPGA with 32-, 24-, and 18-bit floating-point numbers. To confirm the correctness of our architecture, we successfully train our hardware using three different teacher signals. (c) 2020 Published by Elsevier B.V.
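For orientation, the sketch below outlines the software-level computation the abstract describes: an Izhikevich reservoir with binarized (+1/-1) recurrent weights and a readout trained by FORCE. It is a minimal illustration, not the paper's architecture; the network size N = 510 comes from the abstract, but the time step, coupling gains `G` and `Q`, bias current, single-exponential spike trace, and sign-based binarization scaling are all assumed values, and the readout update uses the textbook RLS recursion rather than the inverse-QR systolic-array variant the paper implements in hardware.

```python
import numpy as np

# Minimal FORCE-on-a-spiking-reservoir sketch (assumed parameters throughout).
rng   = np.random.default_rng(0)
N     = 510            # reservoir size reported in the abstract
dt    = 0.04           # integration step in ms (assumed)
G, Q  = 1.0e4, 1.0e4   # recurrent / feedback gains (assumed)
tau_r = 20.0           # spike-trace time constant in ms (assumed)

# Izhikevich neuron state and parameters (regular-spiking values).
v = -65.0 * np.ones(N)           # membrane potential (mV)
u = np.zeros(N)                  # recovery variable
a, b, c, d = 0.02, 0.2, -65.0, 8.0

# Binary (+1/-1) reservoir weights scaled by G/N -- a stand-in for the
# paper's "careful binarization" of the random connection matrix.
omega = G * np.sign(rng.standard_normal((N, N))) / N
eta   = Q * (2.0 * rng.random(N) - 1.0)   # feedback encoder (assumed form)
phi   = np.zeros(N)                       # readout (decoder) weights
P     = np.eye(N)                         # RLS inverse correlation matrix
r     = np.zeros(N)                       # low-pass filtered spike trace

def step(target, train=True):
    """Advance the network one time step; optionally apply the RLS update."""
    global v, u, r, phi, P
    z = phi @ r                                   # network output
    I = omega @ r + eta * z + 100.0               # recurrent + feedback + bias
    v_new = v + dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
    u    += dt * a * (b * v - u)
    spiked = v_new >= 30.0                        # spike and reset
    v = np.where(spiked, c, v_new)
    u = np.where(spiked, u + d, u)
    r += dt * (-r / tau_r) + spiked / tau_r       # single-exponential trace
    if train:                                     # standard RLS (not inverse-QR)
        Pr  = P @ r
        k   = Pr / (1.0 + r @ Pr)
        phi -= (z - target) * k
        P   -= np.outer(k, Pr)
    return z
```

In hardware, one way the abstract's on-the-fly connection matrix could be realized is to regenerate each binarized row of `omega` from a seeded pseudo-random generator whenever it is needed, so the full matrix never has to be stored; the dense matrix above is kept only for clarity of the sketch.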
