Journal: 計測自動制御学会論文集 (Transactions of the Society of Instrument and Control Engineers)

Incremental learning algorithm for feedforward neural network with long-term memory


Abstract

When neural networks are trained incrementally, input-output relations learned earlier tend to be destroyed by the learning of new data. This phenomenon is often called interference. To suppress the interference efficiently, we propose an incremental learning model in which Long-Term Memory (LTM) is introduced into the Resource Allocating Network (RAN) proposed by Platt. This memory is used to store useful training data (called LTM data) that are generated adaptively during the learning phase. When a new training datum is given, the proposed system retrieves several LTM data that are useful for suppressing the interference. The retrieved LTM data, together with the new training datum, are then trained simultaneously in RAN. In the simulations, the proposed model is applied to various incremental learning problems to evaluate its function approximation accuracy and learning speed. The simulation results confirm that the proposed model attains good approximation accuracy at a small computational cost.


