Journal of Vision

An alternative to explicit divisive normalization models



Abstract

Probabilistic inference lies at the heart of many crucial brain processes, such as primary visual processing, attentional modulation, multi-sensory integration, reference frame transformations and decision making. It is possible that inference is implemented by marginalization across variables through explicit divisive normalization. However, direct evidence for such processes in the brain is sparse; furthermore, for all but the simplest distributions, explicit marginalization requires intractable normalization operations. Here, we argue that explicit divisive normalization is not the only way marginalization can be performed, and we propose an alternative, physiologically more realistic mechanism. This alternative mechanism (implicit approximate normalization: IAN) is based on well-established parallel computing and machine learning principles and is functionally equivalent to divisive normalization without requiring intractable sums/integrals. Specifically, we implemented multi-layer feed-forward neural networks and trained them to carry out several tasks using a pseudo-Newton method with preconditioned conjugate gradient descent. In doing so, we explicitly modelled near-optimal multi-sensory integration, reference frame transformations, and both in combination. We did so using different neural coding schemes within the same network, i.e. probabilistic spatial codes and probabilistic joint codes. We also implemented comparable spiking networks with realistic synaptic dynamics, demonstrating the feasibility of IAN at the spiking-neuron level. Our networks produce a wide range of behaviours similar to observations of real neurons in the brain, including inverse effectiveness, the spatial correspondence principle, super-additivity, gain-like modulations and multi-sensory suppression. One advantage of IAN is that it works regardless of the coding scheme used in individual neurons, while divisive normalization requires explicitly matching population codes. In addition, IAN does not need the neatly organized, regular connectivity structure between contributing neurons that divisive normalization requires. Overall, our study demonstrates that marginalizing operations can be carried out in simple networks of purely additive neurons without explicit divisive normalization.
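As an illustration of the distinction drawn above, the sketch below (Python/NumPy, not taken from the paper) computes the marginal of a toy joint population code in two ways: once with an explicit sum-and-divide normalization step, and once with a small feed-forward network of purely additive units whose readout is fitted to reproduce the same marginal, so that no division is carried out at run time. The Gaussian tuning curves, the network size and the least-squares fit of the readout (standing in for the pseudo-Newton training with preconditioned conjugate gradients described in the abstract) are all assumptions made for this example.

```python
# Illustrative sketch only; not the implementation used in the study.
import numpy as np

rng = np.random.default_rng(0)
N = 20                                    # neurons per variable in the toy code
prefs = np.linspace(-1.0, 1.0, N)         # preferred values of the toy neurons

def joint_population_response(x, y):
    """Toy probabilistic joint code: bell-shaped responses tuned to (x, y)."""
    rx = np.exp(-(x - prefs) ** 2 / 0.1)
    ry = np.exp(-(y - prefs) ** 2 / 0.1)
    return np.outer(rx, ry)               # N x N joint response

def explicit_marginal(R):
    """Marginalize over y with an explicit divisive normalization step."""
    summed = R.sum(axis=1)                # sum over the nuisance variable
    return summed / summed.sum()          # the division the network avoids

# Hidden layer of purely additive units with a threshold nonlinearity;
# hidden weights stay random and only the linear readout is fitted below.
H = 256
W1 = rng.standard_normal((H, N * N)) / np.sqrt(N * N)

def hidden(R):
    return np.maximum(W1 @ R.ravel(), 0.0)

# Training set: joint responses paired with their normalized marginals.
samples = rng.uniform(-1.0, 1.0, size=(2000, 2))
X = np.stack([hidden(joint_population_response(x, y)) for x, y in samples])
T = np.stack([explicit_marginal(joint_population_response(x, y)) for x, y in samples])
W2, *_ = np.linalg.lstsq(X, T, rcond=None)    # least-squares readout, H x N

# The fitted network approximates the normalized marginal without dividing.
R = joint_population_response(0.3, -0.5)
print(np.round(explicit_marginal(R), 3))
print(np.round(hidden(R) @ W2, 3))
```

Because only the linear readout is fitted, this is a crude stand-in for the trained multi-layer networks in the study; it is meant only to show that a normalized marginal can be read out by additive units without an explicit division.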
