Hydrology and Earth System Sciences

Generalisation for neural networks through data sampling and training procedures, with applications to streamflow predictions

Abstract

Since the 1990s, neural networks have been applied in many studies in hydrology and water resources. Extensive reviews of neural network modelling have identified the major issues affecting modelling performance. One of the most important is generalisation, which refers to building models that can infer the behaviour of the system under study not only under the conditions represented in the data used for training and testing, but also under conditions absent from those data sets yet inherent to the system. This work compares five generalisation approaches: stop training, Bayesian regularisation, stacking, bagging and boosting. All have been tested with neural networks in various scientific domains: stop training and stacking have been applied regularly in hydrology and water resources for some years, while Bayesian regularisation, bagging and boosting have been less common. The comparison is applied to streamflow modelling with multi-layer perceptron neural networks trained with the Levenberg-Marquardt algorithm. Six catchments with diverse hydrological behaviours are employed as test cases, so that general conclusions and guidelines on the use of the generalisation techniques can be drawn for practitioners in hydrology and water resources. All generalisation approaches improve performance compared with standard neural networks without generalisation. Stacking, bagging and boosting, which affect the construction of the training sets, provide a larger improvement over standard models than stop training and Bayesian regularisation, which regulate the training algorithm. Stacking performs better than the others, although its advantage over bagging and boosting is slight and is not consistent from one catchment to another. For a good combination of improvement and stability in modelling performance, the joint use of stop training or Bayesian regularisation with either bagging or boosting is recommended.
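
The recommended combination (stop training together with a resampling ensemble) is straightforward to prototype. The sketch below is only illustrative and is not the paper's implementation: it assumes scikit-learn, uses purely synthetic catchment data, and trains the multi-layer perceptrons with the library's default Adam solver, since scikit-learn does not provide the Levenberg-Marquardt algorithm used in the paper. Early stopping on a held-out validation fraction plays the role of stop training, and bagging draws a bootstrap resample of the training set for each ensemble member.

import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# Purely synthetic "catchment": lagged rainfall drives next-step streamflow.
n = 2000
rain = rng.gamma(shape=2.0, scale=3.0, size=n)
X = np.column_stack([rain[2:], rain[1:-1], rain[:-2]])   # rainfall at lags 0, 1, 2
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0.0, 0.5, size=n - 2)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Each ensemble member is a multi-layer perceptron whose training stops
# when the score on a held-out validation fraction stops improving
# ("stop training").
member = MLPRegressor(hidden_layer_sizes=(10,),
                      early_stopping=True,
                      validation_fraction=0.2,
                      max_iter=2000,
                      random_state=0)

# Bagging fits each member on a bootstrap resample of the training set,
# so the ensemble combines both generalisation mechanisms.
ensemble = BaggingRegressor(member, n_estimators=20, random_state=0)
ensemble.fit(X_train, y_train)
print("Test R^2:", ensemble.score(X_test, y_test))

Boosting could be tried instead by wrapping the same member in, for example, scikit-learn's AdaBoostRegressor; the design point illustrated here is that the ensemble method operates on the training data while stop training regulates the fitting of each individual network.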