International Joint Conference on Neural Networks

Improving Learning Efficiency of Recurrent Neural Network through Adjusting Weights of All Layers in a Biologically-inspired Framework



Abstract

Brain-inspired models have become a focus of the artificial intelligence field. As a biologically plausible network, the recurrent neural network in the reservoir computing framework has been proposed as a popular model of cortical computation because of its complex dynamics and highly recurrent connections. To train such a network, and inspired by the global modulation that human emotions exert on cognition and motion control, we introduce a novel reward-modulated Hebbian learning rule. Unlike liquid computing approaches that adjust only the readout weights, or methods that change only the internal recurrent weights, our rule adjusts the internal recurrent weights together with the input and readout weights, using solely delayed, phasic rewards. Experimental results show that the proposed method can train a recurrent neural network operating in a near-chaotic regime to complete motion control and working-memory tasks with higher accuracy and learning efficiency.
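
To make the idea concrete, below is a minimal, hypothetical sketch (in Python/NumPy) of a reward-modulated Hebbian rule that updates the input, internal recurrent, and readout weights of a reservoir-style RNN, with a single delayed, phasic reward delivered at the end of each trial. The network size, gain, learning rate, noise level, eligibility-trace form, and the toy target-holding task are illustrative assumptions, not the authors' exact formulation.

# Hypothetical sketch: reward-modulated Hebbian updates on all three weight
# matrices (input, recurrent, readout) of a reservoir RNN, driven by a single
# delayed, phasic reward per trial. Sizes, gains, rates and the toy task are
# illustrative assumptions, not the paper's exact settings.
import numpy as np

rng = np.random.default_rng(0)

N_in, N, N_out = 2, 200, 1       # input, reservoir, readout dimensions
g = 1.5                          # recurrent gain (puts the reservoir near chaos)
eta = 5e-4                       # learning rate
noise_std = 0.05                 # exploratory noise on reservoir units

W_in = rng.normal(0.0, 1.0 / np.sqrt(N_in), (N, N_in))
W_rec = rng.normal(0.0, g / np.sqrt(N), (N, N))
W_out = np.zeros((N_out, N))
R_bar = 0.0                      # running reward baseline


def run_trial(u_seq, target_seq):
    """One trial: accumulate Hebbian eligibility traces, then apply a
    reward-modulated update to W_in, W_rec and W_out at trial end."""
    global W_in, W_rec, W_out, R_bar

    r = np.zeros(N)                                  # reservoir rates
    r_slow = np.zeros(N)                             # low-pass filtered rates
    z_slow = np.zeros(N_out)                         # low-pass filtered output
    e_in = np.zeros_like(W_in)                       # eligibility traces
    e_rec = np.zeros_like(W_rec)
    e_out = np.zeros_like(W_out)
    err = 0.0

    for u, target in zip(u_seq, target_seq):
        r_prev = r
        noise = noise_std * rng.standard_normal(N)   # exploratory perturbation
        r = np.tanh(W_rec @ r_prev + W_in @ u) + noise
        z = W_out @ r                                # readout

        # Hebbian eligibility: (post - filtered post) outer pre, for every layer
        e_in += np.outer(r - r_slow, u)
        e_rec += np.outer(r - r_slow, r_prev)
        e_out += np.outer(z - z_slow, r)
        r_slow += 0.2 * (r - r_slow)
        z_slow += 0.2 * (z - z_slow)

        err += float(np.sum((z - target) ** 2))

    # Delayed, phasic reward: delivered only once, at the end of the trial
    R = -err / len(u_seq)
    advantage = R - R_bar                            # reward vs. recent baseline
    R_bar += 0.1 * (R - R_bar)

    # Reward-modulated Hebbian update of all layers' weights
    W_in += eta * advantage * e_in
    W_rec += eta * advantage * e_rec
    W_out += eta * advantage * e_out
    return R


# Toy usage: hold a constant output after a brief input cue (working-memory-like)
T = 100
inputs = [np.array([1.0, 0.0]) if t < 10 else np.zeros(2) for t in range(T)]
targets = [np.array([0.5])] * T
for trial in range(200):
    reward = run_trial(inputs, targets)

The point the sketch illustrates is that the same phasic reward signal gates Hebbian updates in every layer, rather than only at the readout as in classical liquid/reservoir training.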