
High-parallelism Inception-like Spiking Neural Networks for Unsupervised Feature Learning


Abstract

Spiking Neural Networks (SNNs) are brain-inspired, event-driven machine learning algorithms that are widely recognized for enabling ultra-energy-efficient hardware. Among existing SNNs, unsupervised SNNs based on synaptic plasticity, especially Spike-Timing-Dependent Plasticity (STDP), are considered to have great potential in imitating the learning process of the biological brain. Nevertheless, existing STDP-based SNNs suffer from constrained learning capability and/or slow learning speed. Most STDP-based SNNs adopt a slow-learning Fully-Connected (FC) architecture and use a sub-optimal vote-based scheme for spike decoding. In this paper, we overcome these limitations with: 1) a high-parallelism network architecture, inspired by the Inception module in Artificial Neural Networks (ANNs); 2) a Vote-for-All (VFA) decoding layer as a replacement for the standard vote-based spike decoding scheme, to reduce the information loss in spike decoding; and 3) a proposed adaptive repolarization (resetting) mechanism that accelerates SNNs' learning by enhancing spiking activities. Our experimental results on two established benchmark datasets (MNIST/EMNIST) show that our network architecture achieves superior performance compared to the widely used FC architecture and a more advanced Locally-Connected (LC) architecture, and that our SNN achieves results competitive with state-of-the-art unsupervised SNNs (95.64%/80.11% accuracy on the MNIST/EMNIST datasets) while offering superior learning efficiency and robustness against hardware damage. Our SNN attained high classification accuracy within only hundreds of training iterations, and random destruction of large numbers of synapses or neurons led to only negligible performance degradation. (c) 2021 Elsevier B.V. All rights reserved.
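The abstract builds on the classic pair-based STDP rule, in which a synapse is strengthened when the presynaptic spike precedes the postsynaptic spike and weakened otherwise. As a minimal illustration of that rule (with hypothetical parameter values, not the paper's exact learning rule), the weight update for a single pre/post spike pair can be sketched as:

```python
import math

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for a spike pair.

    delta_t = t_post - t_pre (ms). Amplitudes (a_plus, a_minus) and time
    constants (tau_plus, tau_minus) are illustrative values only.
    """
    if delta_t >= 0:
        # Pre fires before post: potentiation (LTP), decaying with the gap.
        return a_plus * math.exp(-delta_t / tau_plus)
    # Post fires before pre: depression (LTD).
    return -a_minus * math.exp(delta_t / tau_minus)

# Causal pairing strengthens the synapse; anti-causal pairing weakens it,
# and both effects fade as the spikes move further apart in time.
print(stdp_dw(5.0))    # positive (LTP)
print(stdp_dw(-5.0))   # negative (LTD)
```

The exponential windows make the update local in time, which is what makes STDP attractive for the event-driven, unsupervised learning setting the paper targets.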

Bibliographic Record

  • Source
    Neurocomputing | 2021, Issue 21 | pp. 92-104 | 13 pages
  • Author affiliations

    Sun Yat Sen Univ Sch Elect & Informat Technol Guangzhou Peoples R China|Univ Sydney Sch Comp Sci Biomed & Multimedia Informat Technol Res Grp Sydney NSW Australia;

    Sun Yat Sen Univ Sch Elect & Informat Technol Guangzhou Peoples R China;

    Univ Sydney Sch Comp Sci Biomed & Multimedia Informat Technol Res Grp Sydney NSW Australia;

    Univ Sydney Sch Comp Sci Biomed & Multimedia Informat Technol Res Grp Sydney NSW Australia;

    Sun Yat Sen Univ Sch Elect & Informat Technol Guangzhou Peoples R China;

    Sun Yat Sen Univ Sch Elect & Informat Technol Guangzhou Peoples R China;

  • Indexed in: Science Citation Index (SCI, USA); Engineering Index (EI, USA)
  • Format: PDF
  • Language: eng
  • Keywords

    Spiking Neural Network (SNN); Unsupervised learning; Inception module; Learning efficiency; Robustness;

