
Hebbian Learning of Recurrent Connections: A Geometrical Perspective



Abstract

We show how a Hopfield network with modifiable recurrent connections undergoing slow Hebbian learning can extract the underlying geometry of an input space. First, we use a slow/fast analysis to derive an averaged system whose dynamics derive from an energy function and therefore always converge to equilibrium points. The equilibria reflect the correlation structure of the inputs, a global object extracted through local recurrent interactions only. Second, we use numerical methods to illustrate how learning extracts the hidden geometrical structure of the inputs. Indeed, multidimensional scaling methods make it possible to project the final connectivity matrix onto a Euclidean distance matrix in a high-dimensional space, with the neurons labeled by spatial position within this space. The resulting network structure turns out to be roughly convolutional. The residual of the projection defines the nonconvolutional part of the connectivity, which is minimized in the process. Finally, we show how restricting the dimension of the space where the neurons live gives rise to patterns similar to cortical maps. We motivate this using an energy efficiency argument based on wire length minimization. We then show how this approach leads to the emergence of ocular dominance or orientation columns in primary visual cortex via the self-organization of recurrent rather than feedforward connections. In addition, we establish that the nonconvolutional (or long-range) connectivity is patchy and is co-aligned in the case of orientation learning.
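As a rough illustration of the pipeline the abstract describes (slow Hebbian learning relaxing the recurrent weights toward the input correlation matrix, followed by a multidimensional-scaling embedding of the learned connectivity), here is a minimal NumPy sketch. It is not the authors' model: the ring-shaped input ensemble, the learning rate, and the use of classical MDS in place of the paper's specific projection are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_patterns = 30, 500

# Hypothetical input ensemble: smooth bumps on a ring, so neurons at nearby
# ring positions receive correlated input (all names here are illustrative).
pos = np.arange(n)
centers = rng.integers(0, n, n_patterns)
circ = (pos[None, :] - centers[:, None] + n / 2) % n - n / 2
patterns = np.exp(-0.5 * circ**2 / 3.0**2)        # shape (n_patterns, n)

# Slow Hebbian learning of recurrent weights: the averaged system relaxes
# toward the input correlation matrix <x x^T>.
eta, W = 0.01, np.zeros((n, n))
for x in patterns:
    W += eta * (np.outer(x, x) - W)
np.fill_diagonal(W, 0.0)

# Classical MDS: read strong weights as short distances and embed the
# neurons in a low-dimensional Euclidean space.
D = W.max() - W                                   # dissimilarity matrix
np.fill_diagonal(D, 0.0)
J = np.eye(n) - np.ones((n, n)) / n               # centering matrix
B = -0.5 * J @ (D**2) @ J                         # double-centered Gram matrix
evals, evecs = np.linalg.eigh(B)                  # ascending eigenvalues
coords = evecs[:, -2:] * np.sqrt(np.maximum(evals[-2:], 0.0))
print(coords.shape)                               # one 2-D coordinate per neuron
```

Because the learned weights mirror the circular input correlations, the top two MDS coordinates recover a ring-like arrangement of the neurons, which is the sense in which the connectivity "extracts the underlying geometry of the input space."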

Record details

  • Source
    Neural Computation | 2012, No. 9 | pp. 2346-2383 | 38 pages
  • Author affiliations

    NeuroMathComp Project Team, INRIA Sophia-Antipolis Mediterranee, 06902 Sophia Antipolis, France;

    NeuroMathComp Project Team, INRIA Sophia-Antipolis Mediterranee, 06902 Sophia Antipolis, France;

    Department of Mathematics, University of Utah, Salt Lake City, UT 84112, U.S.A., and Mathematical Institute, University of Oxford, Oxford OX1 3LB, U.K.;

  • Indexed in: Science Citation Index (SCI); Chemical Abstracts (CA)
  • Original format: PDF
  • Language: English

