Journal: IEEE Transactions on Neural Networks

On-line training of recurrent neural networks with continuous topology adaptation

Abstract

This paper presents an online procedure for training dynamic neural networks with input-output recurrences whose topology is continuously adjusted to the complexity of the target system dynamics. This is accomplished by changing the number of elements in the network's hidden layer whenever the existing topology cannot capture the dynamics presented by the new data. The training mechanism is based on a suitably altered extended Kalman filter (EKF) algorithm, which is used simultaneously for network parameter adjustment and for state estimation. The network consists of a single hidden layer with Gaussian radial basis functions (GRBFs) and a linear output layer. The choice of GRBFs is driven by the requirements of online learning, which call for an architecture in which a new data point has only local influence, so that previously learned dynamics are not forgotten. Continuous topology adaptation is implemented in our algorithm to avoid the memory and computational problems of using a regular grid of GRBFs covering the network input space. Furthermore, we show that the resulting parameter increase can be handled "smoothly" without interfering with the already acquired information. If the target system dynamics change over time, we show that a suitable forgetting factor can be used to "unlearn" the no-longer-relevant dynamics. The quality of the recurrent network training algorithm is demonstrated on the identification of nonlinear dynamic systems.
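The abstract describes three interacting mechanisms: an EKF that updates the network parameters online, a growth rule that adds a GRBF unit when the current topology cannot explain a new data point, and a forgetting factor for time-varying dynamics. The sketch below is only an illustration of how these pieces might fit together for a scalar output; the class name, the novelty test, and all thresholds are assumptions rather than the paper's formulation, and the EKF here acts on the linear output weights only, not on the full recurrent state as in the paper.

```python
import numpy as np


class GRBFNetworkEKF:
    """Illustrative sketch (not the paper's exact algorithm): a Gaussian RBF
    network with a linear output layer, trained online by an extended Kalman
    filter, with novelty-based unit addition and a forgetting factor."""

    def __init__(self, input_dim, q=1e-4, r=1e-2, forgetting=0.995,
                 novelty_threshold=0.5, width=1.0):
        self.input_dim = input_dim
        self.centers = np.empty((0, input_dim))   # GRBF centers
        self.weights = np.empty(0)                # linear output weights
        self.P = np.empty((0, 0))                 # EKF error covariance
        self.q, self.r = q, r                     # process / measurement noise
        self.lam = forgetting                     # forgetting factor (<= 1)
        self.novelty_threshold = novelty_threshold
        self.width = width                        # common GRBF width (assumption)

    def _phi(self, x):
        """Hidden-layer activations for input x."""
        if self.centers.shape[0] == 0:
            return np.empty(0)
        d2 = np.sum((self.centers - x) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def predict(self, x):
        phi = self._phi(x)
        return float(phi @ self.weights) if phi.size else 0.0

    def _add_unit(self, x, residual):
        """Topology adaptation: grow the hidden layer at the new data point."""
        self.centers = np.vstack([self.centers, x])
        self.weights = np.append(self.weights, residual)
        n = self.weights.size
        P_new = np.eye(n)                         # large uncertainty for the new weight
        P_new[:n - 1, :n - 1] = self.P            # keep acquired information intact
        self.P = P_new

    def update(self, x, y):
        """One online step: predict, grow if the data are novel, else EKF-correct."""
        e = y - self.predict(x)
        # Add a unit only when x is far from every center AND the error is large,
        # so new data points have local influence and old dynamics are preserved.
        far = (self.centers.shape[0] == 0 or
               np.min(np.linalg.norm(self.centers - x, axis=1)) > self.width)
        if far and abs(e) > self.novelty_threshold:
            self._add_unit(x, e)
            return e
        # EKF measurement update on the output weights; dividing the covariance
        # by the forgetting factor discounts ("unlearns") outdated dynamics.
        phi = self._phi(x)                        # Jacobian of output w.r.t. weights
        P = self.P / self.lam + self.q * np.eye(self.weights.size)
        s = float(phi @ P @ phi) + self.r         # innovation variance
        k = (P @ phi) / s                         # Kalman gain
        self.weights = self.weights + k * e
        self.P = P - np.outer(k, phi @ P)
        return e


if __name__ == "__main__":
    # Identify a toy nonlinear plant one sample at a time; the regressor contains
    # the previous output and the current input (an input-output recurrence).
    rng = np.random.default_rng(0)
    net = GRBFNetworkEKF(input_dim=2)
    y_prev = 0.0
    for t in range(1000):
        u = rng.uniform(-1.0, 1.0)
        y = 0.6 * y_prev + 0.3 * u ** 3 + 0.1 * np.sin(y_prev)
        net.update(np.array([y_prev, u]), y)
        y_prev = y
    print("hidden units:", net.centers.shape[0])
```

In this arrangement the network never stores past data: each sample triggers either a local unit addition or a single EKF correction, which is what makes the procedure usable online.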
