Quarterly Journal of the Royal Meteorological Society

Can multi-model combination really enhance the prediction skill of probabilistic ensemble forecasts?


Abstract

The success of multi-model ensemble combination has been demonstrated in many studies. Given that a multi-model contains information from all participating models, including the less skilful ones, the question remains as to why, and under what conditions, a multi-model can outperform the best participating single model. It is the aim of this paper to resolve this apparent paradox. The study is based on a synthetic forecast generator, allowing the generation of perfectly-calibrated single-model ensembles of any size and skill. Additionally, the degree of ensemble under-dispersion (or overconfidence) can be prescribed. Multi-model ensembles are then constructed from both weighted and unweighted averages of these single-model ensembles. Applying this toy model, we carry out systematic model-combination experiments. We evaluate how multi-model performance depends on the skill and overconfidence of the participating single models. It turns out that multi-model ensembles can indeed locally outperform a 'best-model' approach, but only if the single-model ensembles are overconfident. The reason is that multi-model combination reduces overconfidence, i.e. ensemble spread is widened while average ensemble-mean error is reduced. This implies a net gain in prediction skill, because probabilistic skill scores penalize overconfidence. Under these conditions, even the addition of an objectively-poor model can improve multi-model skill. It seems that simple ensemble inflation methods cannot yield the same skill improvement. Using seasonal near-surface temperature forecasts from the DEMETER dataset, we show that the conclusions drawn from the toy-model experiments hold equally in a real multi-model ensemble prediction system.
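The mechanism described in the abstract can be illustrated with a minimal numerical sketch. The generator below is an assumption, not the paper's exact synthetic forecast generator: each model's ensemble is centred on the truth plus a Gaussian model error, with member spread deliberately narrower than that error (overconfidence). Pooling the members of several such models widens the spread while averaging out the centre errors, which a probabilistic score such as the CRPS rewards. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def crps_ensemble(members, obs):
    """Monte Carlo CRPS estimator: mean|x_i - y| - 0.5 * mean|x_i - x_j|."""
    members = np.asarray(members, dtype=float)
    term1 = np.abs(members - obs).mean()
    term2 = np.abs(members[:, None] - members[None, :]).mean()
    return term1 - 0.5 * term2

# Illustrative toy setup (not the paper's configuration):
T, K, M = 2000, 10, 3     # forecast cases, members per model, number of models
sigma_err = 1.0           # std of each model's ensemble-mean error
overconf = 0.3            # member spread as a fraction of sigma_err (<1 = overconfident)

single_scores, multi_scores = [], []
for _ in range(T):
    truth = rng.normal()
    # Each model: ensemble centred on truth + its own error, with too-narrow spread.
    centres = truth + rng.normal(0.0, sigma_err, size=M)
    ensembles = centres[:, None] + rng.normal(0.0, overconf * sigma_err, size=(M, K))
    single_scores.append(crps_ensemble(ensembles[0], truth))      # one single model
    multi_scores.append(crps_ensemble(ensembles.ravel(), truth))  # pooled multi-model

print(f"single-model mean CRPS: {np.mean(single_scores):.3f}")
print(f"multi-model mean CRPS:  {np.mean(multi_scores):.3f}")
```

With these settings the pooled (unweighted) multi-model ensemble scores a lower mean CRPS than any equally-skilful overconfident single model, consistent with the abstract's claim; if `overconf` is set to 1.0 (perfectly calibrated members), the advantage of pooling largely disappears.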
