Source: LIPIcs (Leibniz International Proceedings in Informatics)

Random Sketching, Clustering, and Short-Term Memory in Spiking Neural Networks



Abstract

We study input compression in a biologically inspired model of neural computation. We demonstrate that a network consisting of a random projection step (implemented via random synaptic connectivity) followed by a sparsification step (implemented via winner-take-all competition) can reduce well-separated high-dimensional input vectors to well-separated low-dimensional vectors. By augmenting our network with a third module, we can efficiently map each input (along with any small perturbations of the input) to a unique representative neuron, solving a neural clustering problem. Both the size of our network and its processing time, i.e., the time it takes the network to compute the compressed output given a presented input, are independent of the (potentially large) dimension of the input patterns and depend only on the number of distinct inputs that the network must encode and the pairwise relative Hamming distance between these inputs. The first two steps of our construction mirror known biological networks, for example, in the fruit fly olfactory system [Caron et al., 2013; Lin et al., 2014; Dasgupta et al., 2017]. Our analysis helps provide a theoretical understanding of these networks and lays a foundation for how random compression and input memorization may be implemented in biological neural networks. A technical contribution of our network design is the implementation of a short-term memory. Our network can be given a desired memory time t_m as an input parameter and satisfies the following with high probability: any pattern presented several times within a time window of t_m rounds will be mapped to a single representative output neuron. However, a pattern not presented for on the order of t_m rounds will be "forgotten", and its representative output neuron will be released to accommodate newly introduced patterns.
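The short-term memory behavior — reuse a representative neuron for patterns seen within the memory window, release it after roughly t_m idle rounds — can be mimicked at a high level in plain code. This is a toy illustration under assumed semantics, not the paper's spiking implementation; the class name and its timeout rule are hypothetical, and it assumes a free neuron is always available when a new pattern arrives.

```python
class ShortTermMemory:
    """Toy model: map each presented pattern (sketch) to a representative
    output neuron, and release neurons whose pattern has been idle for
    at least t_m rounds."""

    def __init__(self, num_neurons, t_m):
        self.t_m = t_m
        self.free = list(range(num_neurons))  # unassigned output neurons
        self.assigned = {}                    # sketch -> (neuron, last round seen)
        self.round = 0

    def present(self, sketch):
        self.round += 1
        # "Forget" patterns idle for >= t_m rounds, releasing their neurons.
        for s, (neuron, last) in list(self.assigned.items()):
            if self.round - last >= self.t_m:
                del self.assigned[s]
                self.free.append(neuron)
        if sketch in self.assigned:
            neuron, _ = self.assigned[sketch]  # repeat within the window
        else:
            neuron = self.free.pop(0)          # assumes capacity is available
        self.assigned[sketch] = (neuron, self.round)
        return neuron


mem = ShortTermMemory(num_neurons=3, t_m=3)
first = mem.present("a")          # round 1: "a" gets a neuron
same = mem.present("a")           # round 2: same neuron, within the window
for _ in range(3):
    mem.present("b")              # rounds 3-5: "a" stays idle
after = mem.present("a")          # round 6: "a" was forgotten, fresh neuron
print(first, same, after)
```

Repeats inside the window hit the same neuron; once a pattern sits idle for t_m rounds its neuron is recycled, so the next presentation is treated as new.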
