
A cortical sparse distributed coding model linking mini- and macrocolumn-scale functionality


Abstract

No generic function for the minicolumn, i.e., one that would apply equally well to all cortical areas and species, has yet been proposed. I propose that the minicolumn does have a generic functionality, which only becomes clear when seen in the context of the function of the higher-level, subsuming unit, the macrocolumn. I propose that: a) a macrocolumn's function is to store sparse distributed representations of its inputs and to be a recognizer of those inputs; and b) the generic function of the minicolumn is to enforce macrocolumnar code sparseness. The minicolumn, defined here as a physically localized pool of ~20 L2/3 pyramidals, does this by acting as a winner-take-all (WTA) competitive module, implying that macrocolumnar codes consist of ~70 active L2/3 cells, assuming ~70 minicolumns per macrocolumn. I describe an algorithm for activating these codes during both learning and retrieval, which causes more similar inputs to map to more highly intersecting codes, a property which yields ultra-fast (immediate, first-shot) storage and retrieval. The algorithm achieves this by adding an amount of randomness (noise) into the code selection process, which is inversely proportional to an input's familiarity. I propose a possible mapping of the algorithm onto cortical circuitry, and adduce evidence for a neuromodulatory implementation of this familiarity-contingent noise mechanism. The model is distinguished from other recent columnar cortical circuit models in proposing a generic minicolumnar function in which a group of cells within the minicolumn, the L2/3 pyramidals, compete (WTA) to be part of the sparse distributed macrocolumnar code.
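The core mechanism described in the abstract can be illustrated with a minimal sketch: each minicolumn is a WTA module over its ~20 L2/3 cells, and the amount of noise injected into each module's winner selection falls as the input's familiarity rises. The softmax-temperature schedule below is a hypothetical choice for illustration (the abstract states only that noise is inversely proportional to familiarity); the familiarity value `G` is taken as given rather than computed from stored codes.

```python
import numpy as np

rng = np.random.default_rng(0)

N_MINICOLS = 70  # minicolumns per macrocolumn (from the abstract)
N_CELLS = 20     # L2/3 pyramidals per minicolumn (from the abstract)

def choose_code(u, G):
    """Select a macrocolumnar code: one winning cell per minicolumn.

    u: (N_MINICOLS, N_CELLS) array of bottom-up input summations.
    G: familiarity in [0, 1]; 1 = completely familiar input.

    Noise is inversely related to familiarity: a novel input (G ~ 0)
    gives a high softmax temperature and near-random winners, while a
    familiar input (G ~ 1) gives a near-zero temperature, so each WTA
    module deterministically picks its maximally driven cell.
    """
    T = 1e-6 + 10.0 * (1.0 - G)  # hypothetical temperature schedule
    winners = np.empty(N_MINICOLS, dtype=int)
    for m in range(N_MINICOLS):
        logits = u[m] / T
        p = np.exp(logits - logits.max())  # stable softmax
        p /= p.sum()
        winners[m] = rng.choice(N_CELLS, p=p)
    return winners

# The macrocolumnar code is the set of ~70 winners, one per minicolumn.
u = rng.standard_normal((N_MINICOLS, N_CELLS))
code_familiar = choose_code(u, G=1.0)  # max-input winner in each module
code_novel = choose_code(u, G=0.0)     # near-random winners
```

Because similar inputs produce similar per-cell summations `u`, low-noise (familiar) selection maps them to highly overlapping winner sets, which is the property the abstract credits with immediate, first-shot storage and retrieval.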

Bibliographic details

  • Author

    Rinkus, Gerard J.

  • Year: 2017
  • Format: PDF
