
Synergies between Intrinsic and Synaptic Plasticity Based on Information Theoretic Learning



Abstract

In experimental and theoretical neuroscience, synaptic plasticity has dominated the area of neural plasticity for a very long time. Recently, neuronal intrinsic plasticity (IP) has become a hot topic in this area. IP is sometimes thought to be an information-maximization mechanism. However, it is still unclear how IP affects the performance of artificial neural networks in supervised learning applications. From an information-theoretic perspective, the minimum error entropy (MEE) algorithm has recently been proposed as an efficient training method. In this study, we propose a synergistic learning algorithm combining the MEE algorithm as the synaptic plasticity rule and an information-maximization algorithm as the intrinsic plasticity rule. We consider both feedforward and recurrent neural networks and study the interactions between intrinsic and synaptic plasticity. Simulations indicate that the intrinsic plasticity rule can improve the performance of artificial neural networks trained by the MEE algorithm.
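To make the combined update concrete, the following is a minimal sketch of how an MEE synaptic step and an information-maximizing IP step could be interleaved on a single sigmoidal unit. It assumes a Parzen (Gaussian-kernel) estimator of the quadratic error entropy for the synaptic rule and a Triesch-style rule that drives the output toward an exponential distribution for the intrinsic rule; the learning rates, kernel width, target mean activity, and toy data are illustrative assumptions, and the paper itself studies full feedforward and recurrent networks rather than this single unit.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gauss(x, sigma):
    return np.exp(-x ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)

# Toy supervised data (purely illustrative, not from the paper)
N, D = 200, 3
X = rng.uniform(-1.0, 1.0, size=(N, D))
w_true = np.array([1.5, -2.0, 0.7])
d = sigmoid(X @ w_true + 0.1 * rng.standard_normal(N))  # desired outputs in (0, 1)

# Single sigmoidal unit: y = sigmoid(a * (w @ x) + b)
w = 0.1 * rng.standard_normal(D)  # synaptic weights, adapted by the MEE rule
a, b = 1.0, 0.0                   # gain and bias, adapted by the IP rule

eta_w, eta_ip = 0.5, 0.01  # learning rates (assumed values)
sigma = 0.5                # Parzen kernel width for the error-entropy estimator
mu = 0.2                   # target mean activity for the IP rule

for epoch in range(500):
    u = X @ w               # net synaptic input
    y = sigmoid(a * u + b)  # neuron output
    e = y - d               # errors

    # Synaptic (MEE) step: gradient ascent on the quadratic information
    # potential V = (1/N^2) * sum_ij G_sigma(e_i - e_j), which is equivalent
    # to minimizing Renyi's quadratic entropy of the errors.
    diff = e[:, None] - e[None, :]                  # pairwise error differences
    gprime = -diff / sigma ** 2 * gauss(diff, sigma)
    dy_dw = (a * y * (1.0 - y))[:, None] * X        # de_i/dw for each sample
    grad_V = (gprime[:, :, None] * (dy_dw[:, None, :] - dy_dw[None, :, :])).mean(axis=(0, 1))
    w += eta_w * grad_V

    # Intrinsic (IP) step: Triesch-style rule pushing the output distribution
    # toward an exponential with mean mu (maximum entropy under a mean constraint).
    db = eta_ip * (1.0 - (2.0 + 1.0 / mu) * y + y ** 2 / mu)
    da = eta_ip / a + db * u
    a += da.mean()
    b += db.mean()

print("mean |error| after training:", np.abs(sigmoid(a * (X @ w) + b) - d).mean())
```

Interleaving the two rules in one loop mirrors the synergy described in the abstract: the IP update keeps the unit's gain and bias in a regime where its output distribution stays informative, which in turn changes the error samples from which the MEE gradient is estimated.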

Bibliographic record

  • Journal: other
  • Authors: Yuke Li; Chunguang Li
  • Author affiliation: (not given)
  • Year (Volume), Issue: -1 (8), 5
  • Year: -1
  • Pages: e62894
  • Total pages: 17
  • Format: PDF
  • Language: (not given)
  • CLC classification: (not given)
  • Keywords: (not given)
