Frontiers in Computational Neuroscience

A Curiosity-Based Learning Method for Spiking Neural Networks

Abstract

Spiking Neural Networks (SNNs) have recently shown favorable performance. Nonetheless, time-consuming neuron-level computation and complex optimization limit their real-time application. Curiosity plays an important role in brain learning, helping biological brains grasp new knowledge efficiently and actively. Inspired by this learning mechanism, we propose a curiosity-based SNN (CBSNN) model that contains four main learning processes. First, the network is trained with biologically plausible plasticity principles to obtain novelty estimations for all samples in only one epoch. Second, for five epochs, the CBSNN repeatedly learns the samples whose novelty estimations exceed the novelty threshold and dynamically updates those estimations according to the learning results. Third, to avoid overfitting the novel samples and forgetting the learned ones, the CBSNN retrains all samples in one epoch. Finally, steps two and three are repeated periodically until the network converges. Compared with the state-of-the-art Voltage-driven Plasticity-centric SNN (VPSNN) under a standard architecture, our model achieves a higher accuracy of 98.55% with only 54.95% of its computation cost on the MNIST hand-written digit recognition dataset. Similar conclusions hold on other datasets, namely Iris, NETtalk, Fashion-MNIST, and CIFAR-10. Further experiments and analysis show that such curiosity-based learning helps improve the efficiency of SNNs. To the best of our knowledge, this is the first practical combination of the curiosity mechanism and SNNs, and these improvements will make realistic applications of SNNs possible on more specific tasks within the von Neumann framework.
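The abstract outlines a four-phase training loop. As a rough illustration only, the Python sketch below shows that control flow; the `network` interface (`train`, `novelty`, `converged`), the novelty threshold value, and the cycle limit are hypothetical placeholders and are not taken from the paper, while the five-epoch curiosity phase and the single-epoch retraining phase follow the description above.

import numpy as np

def train_cbsnn(network, samples, labels,
                novelty_threshold=0.5,   # assumed value, not from the paper
                curiosity_epochs=5,
                max_cycles=50):          # assumed cap, not from the paper
    """Sketch of the four-phase curiosity-based training loop.

    `network` is a hypothetical object exposing `train(x, y)` (one epoch of
    plasticity-based updates), `novelty(x)` (per-sample novelty estimates as
    a NumPy array), and `converged()`; `samples` and `labels` are assumed to
    be NumPy arrays indexable along their first axis.
    """
    # Phase 1: one epoch over all samples to obtain initial novelty estimates.
    network.train(samples, labels)
    novelty = network.novelty(samples)

    for _ in range(max_cycles):
        # Phase 2: for five epochs, repeatedly learn only the samples whose
        # novelty exceeds the threshold, updating their estimates afterwards.
        for _ in range(curiosity_epochs):
            novel_idx = np.flatnonzero(novelty > novelty_threshold)
            if novel_idx.size == 0:
                break
            network.train(samples[novel_idx], labels[novel_idx])
            novelty[novel_idx] = network.novelty(samples[novel_idx])

        # Phase 3: retrain on all samples for one epoch to avoid overfitting
        # the novel samples and forgetting the already-learned ones.
        network.train(samples, labels)
        novelty = network.novelty(samples)

        # Phase 4: repeat phases 2 and 3 until the network converges.
        if network.converged():
            break

    return network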
