IEEE International Memory Workshop

Resistive Memories for Spike-Based Neuromorphic Circuits



Abstract

In the last decade, machine learning algorithms have achieved unprecedented performance on many real-world detection and classification tasks, for example in image or speech recognition. Despite these advances, some shortcomings remain. First, these algorithms require intensive memory access, which rules out implementations on standard platforms (e.g. GPUs, FPGAs) for embedded applications. Second, most machine learning algorithms must be trained with huge data sets (supervised learning). Resistive memories (RRAM) have proven to be a promising candidate for overcoming both of these constraints. RRAM arrays can act as dot-product accelerators, one of the main building blocks of neuromorphic computing systems. This approach could provide improvements in power and speed with respect to GPU-based networks. Moreover, RRAM devices are promising candidates for emulating synaptic plasticity, the capability of synapses to enhance or diminish their connectivity between neurons, which is widely believed to be the basis of learning and memory in the brain. Neural systems exhibit plasticity of various types and time scales; synaptic modifications can last anywhere from seconds to days or months. In this work we propose an architecture that implements both Short- and Long-Term Plasticity rules (STP and LTP) using RRAM arrays. We show the benefits of utilizing both kinds of plasticity with two different applications: visual pattern extraction and decoding of neural signals. LTP allows the neural network to learn patterns without a training data set (unsupervised learning), and STP makes the learning process very robust against environmental noise.
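The two mechanisms the abstract centers on, the crossbar dot product and the split between a volatile (STP) and a persistent (LTP) conductance component, can be sketched numerically. This is a minimal illustration only, not the paper's implementation; the conductance values, decay constant, and function names are assumptions.

```python
import numpy as np

def crossbar_dot(G, V):
    """Analog dot product of an RRAM crossbar.

    Voltages V drive the rows; by Ohm's and Kirchhoff's laws each column
    current is I_j = sum_i G[i, j] * V[i], so a single read operation
    yields a full matrix-vector product.
    """
    return G.T @ V

def stp_decay(g_volatile, dt, tau):
    """Short-term (volatile) conductance decays exponentially between spikes."""
    return g_volatile * np.exp(-dt / tau)

# Toy example (all values illustrative, not device data):
G_ltp = np.array([[1.0, 2.0],   # persistent (LTP) conductances, arbitrary units
                  [3.0, 4.0]])
g_stp = np.array([[0.5, 0.0],   # volatile (STP) component added by recent spikes
                  [0.0, 0.5]])
V = np.array([1.0, 1.0])        # read voltages

I = crossbar_dot(G_ltp + g_stp, V)          # effective weight = persistent + volatile
g_stp = stp_decay(g_stp, dt=0.1, tau=0.1)   # volatile part fades; LTP persists
```

The point of the sketch is the asymmetry: after each time step the STP component shrinks toward zero while the LTP conductances are untouched, which is what makes short-term changes act as a noise filter on top of the long-term learned weights.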

