
A Theory of Sequence Indexing and Working Memory in Recurrent Neural Networks



Abstract

To accommodate structured approaches of neural computation, we propose a class of recurrent neural networks for indexing and storing sequences of symbols or analog data vectors. These networks with randomized input weights and orthogonal recurrent weights implement coding principles previously described in vector symbolic architectures (VSA) and leverage properties of reservoir computing. In general, the storage in reservoir computing is lossy, and crosstalk noise limits the retrieval accuracy and information capacity. A novel theory to optimize memory performance in such networks is presented and compared with simulation experiments. The theory describes linear readout of analog data and readout with winner-take-all error correction of symbolic data as proposed in VSA models. We find that diverse VSA models from the literature have universal performance properties, which are superior to what previous analyses predicted. Further, we propose novel VSA models with the statistically optimal Wiener filter in the readout that exhibit much higher information capacity, in particular for storing analog data.

The theory we present also applies to memory buffers, networks with gradual forgetting, which can operate on infinite data streams without memory overflow. Interestingly, we find that different forgetting mechanisms, such as attenuating recurrent weights or neural nonlinearities, produce very similar behavior if the forgetting time constants are matched. Such models exhibit extensive capacity when their forgetting time constant is optimized for given noise conditions and network size. These results enable the design of new types of VSA models for the online processing of data streams.
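The encoding scheme the abstract describes can be made concrete in a few lines. The following is a minimal sketch, not the authors' implementation: it assumes random bipolar code vectors as the randomized input weights, a random permutation as the orthogonal recurrent matrix (a common, cheap choice in VSA models), and a winner-take-all readout; all names and parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    n, n_symbols, seq_len = 1000, 26, 10       # illustrative sizes

    # Random "input weights": one dense bipolar code vector per symbol.
    codebook = rng.choice([-1.0, 1.0], size=(n_symbols, n))

    # Orthogonal recurrent weights: a random permutation matrix.
    perm = rng.permutation(n)

    def rotate(x):
        """One application of the recurrent (permutation) matrix."""
        return x[perm]

    # Encode: at each step rotate the trace and superpose the new symbol's
    # code, so an item seen k steps ago sits at "rotation depth" k.
    sequence = rng.integers(0, n_symbols, size=seq_len)
    trace = np.zeros(n)
    for s in sequence:
        trace = rotate(trace) + codebook[s]

    def recall(trace, k):
        """Winner-take-all readout of the item seen k steps ago (k = 0: most recent)."""
        probes = codebook
        for _ in range(k):
            probes = probes[:, perm]           # rotate every code vector k times
        return int(np.argmax(probes @ trace))  # crosstalk from other items acts as noise

    decoded = [recall(trace, k) for k in range(seq_len)]
    print(decoded == list(sequence[::-1]))     # True with high probability at these sizes

A memory buffer with gradual forgetting, as in the abstract's second paragraph, would correspond to attenuating the recurrent weights, e.g. trace = lam * rotate(trace) + codebook[s] with 0 < lam < 1, so that old items fade instead of accumulating crosstalk indefinitely.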

Bibliographic Information

  • Source
    Neural Computation | 2018, No. 6 | pp. 1449-1513 | 65 pages
  • Author affiliations

    Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA 94720, U.S.A;

    Department of Computer Science, Electrical and Space Engineering, Lulea University of Technology, Lulea SE-971 87, Sweden;

    Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA 94720, U.S.A;

  • Indexed in: Science Citation Index (SCI); Chemical Abstracts (CA)
  • Format: PDF
  • Language: eng
  • CLC classification
  • Keywords

