Artificial Neural Networks in Engineering Conference (ANNIE'98), held November 1-4, 1998, in St. Louis, Missouri, U.S.A.

Training recurrent neural networks for very high performance with the extended Kalman algorithm


Abstract

In our diagnostics studies, we want to develop solutions to dynamical estimation problems on real systems whose behavior is represented by discrete-time samples of sensor data captured and stored in a very large database. We have developed methods that use the generalized extended Kalman filter (GEKF) to train time-lagged recurrent neural networks (TLRNNs). Our results indicate that the GEKF algorithm can effectively train large TLRNNs on extensive time series of noisy data, without the need to inject ad hoc noise during training to sustain performance improvement. These results hold not only on the training data but also in tests of generalization performance, measured by analysis of out-of-sample data drawn from a similar, but different, system.
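The core idea of Kalman-filter training is to treat the network's weight vector as the state of a nonlinear dynamical system and to update it, one measurement at a time, with the standard EKF gain and covariance recursions. The sketch below is an illustrative toy, not the paper's GEKF: the tiny recurrent network, the sine-prediction task, the finite-difference Jacobian (which truncates the recurrent dependence of the hidden state on the weights), and the covariance values are all hypothetical choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_h = 3                       # hidden units
n_w = n_h * (n_h + 2)         # recurrent (n_h*n_h) + input (n_h) + output (n_h) weights
w = rng.normal(0.0, 0.3, n_w) # weight vector = EKF state estimate
P = np.eye(n_w) * 10.0        # weight-error covariance
R = np.array([[1e-2]])        # measurement-noise covariance (assumed)
Q = np.eye(n_w) * 1e-6        # small process noise keeps P from collapsing

def unpack(w):
    W_rec = w[:n_h * n_h].reshape(n_h, n_h)
    w_in = w[n_h * n_h:n_h * n_h + n_h]
    w_out = w[n_h * n_h + n_h:]
    return W_rec, w_in, w_out

def forward(w, h, x):
    # One step of a single-input, single-output tanh recurrent network.
    W_rec, w_in, w_out = unpack(w)
    h_new = np.tanh(W_rec @ h + w_in * x)
    return w_out @ h_new, h_new

def jacobian(w, h, x, eps=1e-5):
    # Finite-difference dy/dw, holding the carried-in hidden state fixed
    # (a truncated, "instantaneous" derivative rather than a full
    # backpropagated-through-time one).
    J = np.zeros((1, n_w))
    for i in range(n_w):
        wp = w.copy(); wp[i] += eps
        wm = w.copy(); wm[i] -= eps
        J[0, i] = (forward(wp, h, x)[0] - forward(wm, h, x)[0]) / (2 * eps)
    return J

# Task: one-step-ahead prediction of a sine sequence.
T = 200
xs = np.sin(0.3 * np.arange(T))
h = np.zeros(n_h)
errs = []
for t in range(T - 1):
    y_pred, h_new = forward(w, h, xs[t])
    H = jacobian(w, h, xs[t])
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    e = xs[t + 1] - y_pred           # innovation (prediction error)
    w = w + (K * e).ravel()          # state (weight) update
    P = P - K @ H @ P + Q            # covariance update
    h = h_new
    errs.append(float(e**2))

print("early MSE:", np.mean(errs[:20]), "late MSE:", np.mean(errs[-20:]))
```

Each observed sample acts as a measurement, so the Kalman gain scales every weight's correction by its current uncertainty in `P`; this per-sample, curvature-aware step is what makes EKF-family training converge in far fewer passes than plain gradient descent on problems like this.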

