Neural Network World

RECURRENT NEURAL NETWORK TRAINING WITH THE KALMAN FILTER-BASED TECHNIQUES



Abstract

Recurrent neural networks, in contrast to classical feedforward neural networks, better handle inputs that have a space-time structure, e.g. symbolic time series. Since the classic gradient methods for recurrent neural network training converge poorly and slowly on longer input sequences, alternative approaches are needed. We describe training with the Extended Kalman Filter and its modifications, the Unscented Kalman Filter and nprKF, as well as their joint versions. The Joint Unscented Kalman Filter has not been used for this purpose before. We compare the performance of these filters with that of classic Truncated Backpropagation Through Time (BPTT(h)) in two next-symbol prediction experiments: a word sequence generated by the Reber automaton, and a sequence generated by quantising the activations of a laser in a chaotic regime. All the new filters achieved significantly better results.
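As a rough illustration of the idea (not the paper's implementation), Kalman filter-based training treats the network weights as the state of a nonlinear system observed through the network's output. The sketch below uses NumPy, a hypothetical tiny Elman-style RNN with scalar input and output, and a finite-difference Jacobian standing in for the exact derivatives (which in practice come from truncated BPTT); all names and parameter values are illustrative assumptions.

```python
import numpy as np

def rnn_forward(w, xs, nh):
    """Run a tiny Elman-style RNN (scalar input, scalar output) over xs."""
    i = 0
    Wxh = w[i:i + nh]; i += nh                            # input -> hidden
    Whh = w[i:i + nh * nh].reshape(nh, nh); i += nh * nh  # hidden -> hidden
    Who = w[i:i + nh]                                     # hidden -> output
    h = np.zeros(nh)
    for x in xs:
        h = np.tanh(x * Wxh + h @ Whh)
    return float(h @ Who)

def ekf_train(seq, nh=4, window=5, q=1e-4, r=0.1, seed=0):
    """EKF weight training: the flattened weights are the filter state,
    the measurement is the network's prediction of the next symbol."""
    rng = np.random.default_rng(seed)
    nw = nh + nh * nh + nh
    w = 0.1 * rng.standard_normal(nw)   # state estimate (weights)
    P = np.eye(nw)                      # state covariance
    eps, errs = 1e-5, []
    for t in range(window, len(seq)):
        xs, target = seq[t - window:t], seq[t]
        y = rnn_forward(w, xs, nh)
        # Measurement Jacobian dy/dw by finite differences (a stand-in
        # for the truncated-BPTT derivatives used in real EKF training).
        H = np.array([(rnn_forward(w + eps * np.eye(nw)[j], xs, nh) - y) / eps
                      for j in range(nw)])
        P = P + q * np.eye(nw)          # time update (random-walk weights)
        s = H @ P @ H + r               # innovation variance (scalar output)
        K = P @ H / s                   # Kalman gain
        w = w + K * (target - y)        # measurement update
        P = P - np.outer(K, H @ P)
        errs.append(target - y)
    return w, errs
```

On a periodic toy sequence the per-step prediction error typically shrinks within a few dozen updates. The joint variants discussed in the abstract additionally place the hidden-state activations in the filter's state vector alongside the weights.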
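For concreteness, the Reber automaton used in the first experiment can be reproduced with a short generator. The transition table below is the standard Reber grammar from the literature, not taken from the paper's text:

```python
import random

# Standard Reber grammar: state -> [(emitted symbol, next state), ...].
# Every string starts with 'B' and ends with 'E' once state 5 is reached.
REBER = {
    0: [('T', 1), ('P', 2)],
    1: [('S', 1), ('X', 3)],
    2: [('T', 2), ('V', 4)],
    3: [('X', 2), ('S', 5)],
    4: [('P', 3), ('V', 5)],
}

def reber_string(rng=random):
    """Generate one word accepted by the Reber automaton."""
    state, out = 0, ['B']
    while state != 5:
        symbol, state = rng.choice(REBER[state])
        out.append(symbol)
    out.append('E')
    return ''.join(out)
```

Training data for next-symbol prediction is then a stream of such strings, with the target at each position being the symbol that follows.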
