
Controlling the dynamics of recurrent neural networks with synaptic learning rules.

Abstract

In the brain, the recurrent architecture of the cortex is critical both to its computational power and to the generation of pathological conditions such as epilepsy. A fundamental question in neuroscience is how the brain effectively harnesses the computational power of recurrent networks, in which activity propagates in an internally generated fashion, while creating behaviorally relevant spatiotemporal patterns of activity. I employ two homeostatic learning rules, termed synaptic scaling (SS) and presynaptic-dependent synaptic scaling (PSD), to study network dynamics in response to brief impulse stimuli. I find that the underlying mathematical structures of the two rules differ in their transition matrices: in SS the transition is a diagonal linear mapping under ordinary matrix multiplication, whereas in PSD it is a nondiagonal nonlinear mapping under the Hadamard product. As a result, SS generates unstable dynamics with runaway excitation, whereas PSD yields stable dynamics. I then conduct a systematic study of learning dynamics in biologically realistic neural networks consisting of spiking neurons and kinetic synapses. With more than one stimulus, multiple neural trajectories emerge in a self-organized manner. Using two graph-theoretic measures, I find that PSD generates a functionally feed-forward network when trained with a single stimulus, and that the complexity of the network structure increases in response to multiple stimuli. In addition, PSD together with spike-timing-dependent plasticity (STDP) improves the ability of the network to incorporate multiple, less variable trajectories. Using continuous neural dynamics, and defining a state vector to describe spike-timing patterns, I study several phenomena of memory dynamics. Finally, I study spontaneous neuronal population activity under STDP. With the simulated global EEG and local field potential, I find that both satisfy a hierarchical symmetry across scales, with an exponent characterizing the degree of balance between excitation and inhibition.
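To make the contrast between the two rules concrete, the following Python sketch compares a generic synaptic-scaling update with a generic presynaptic-dependent scaling update. This is an illustration under stated assumptions, not the thesis's formulation: the toy rate model, the target rate A_target, the learning rate alpha, and all variable names are assumptions introduced here.

import numpy as np

# Hypothetical sketch (not the thesis's equations); all parameters are illustrative.
rng = np.random.default_rng(0)
N = 50
W = rng.uniform(0.0, 0.1, size=(N, N))    # recurrent weights, W[i, j]: neuron j -> neuron i
inp = rng.uniform(0.0, 10.0, size=N)      # presynaptic activity on one trial
A_target = 5.0                            # target postsynaptic rate (assumed)
alpha = 1e-3                              # learning rate (assumed)

A_post = W @ inp                          # toy estimate of postsynaptic rates
err = A_target - A_post                   # per-neuron homeostatic error

# Synaptic scaling (SS): every weight onto neuron i is rescaled by the same factor,
# i.e. a diagonal scaling of the rows of W driven only by the postsynaptic error.
W_ss = W * (1.0 + alpha * err[:, None])

# Presynaptic-dependent scaling (PSD): the update is additionally weighted,
# elementwise, by presynaptic activity -- a Hadamard-type product between W and
# the outer product of postsynaptic error and presynaptic activity.
W_psd = W + alpha * W * np.outer(err, inp)

print("SS scales row 0 by", 1.0 + alpha * err[0])
print("max |W_psd - W_ss| =", np.abs(W_psd - W_ss).max())

The key contrast in the sketch is that the SS update rescales each postsynaptic row by a single factor, whereas the PSD update multiplies elementwise by the outer product of the homeostatic error and the presynaptic activity, which is where the Hadamard product enters.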

Bibliographic record

  • Author: Liu, Jian.
  • Author affiliation: University of California, Los Angeles.
  • Degree-granting institution: University of California, Los Angeles.
  • Subjects: Biology, Neuroscience; Applied Mathematics
  • Degree: Ph.D.
  • Year: 2009
  • Pages: 147 p.
  • Total pages: 147
  • Original format: PDF
  • Language: English (eng)
  • CLC classification:
  • Keywords:
