RIVF International Conference on Computing and Communication Technologies

Measurements of errors in large-scale computational simulations at runtime

Abstract

Verification of simulation codes often involves comparing the simulation's output behavior to a known model using graphical displays or statistical tests. Such a process is challenging for large-scale scientific codes at runtime because they often involve thousands of processes and generate very large data structures. In our earlier work, we proposed a statistical framework for testing the correctness of large-scale applications using their runtime data. This paper studies the concept of 'distribution distance' and establishes the requirements for measuring the runtime differences between a verified stochastic simulation system and its larger-scale counterpart. The paper discusses two types of distribution distance: the χ² distance and the histogram distance. We prototype the verification methodology and evaluate its performance on two production simulation programs. All experiments were conducted on a 20,000-core Cray XE6.
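The two distribution distances named in the abstract can be illustrated for histograms of runtime output data. The following is a minimal sketch, not the paper's implementation; the function names, the symmetric form of the χ² distance, and the L1 form of the histogram distance are assumptions for illustration.

```python
# Illustrative sketch (not the paper's code) of two distances between
# normalized output histograms p and q from a verified run and a
# larger-scale run. Bin values here are made-up examples.

def chi_square_distance(p, q):
    # Symmetric chi-square distance: sum of (p_i - q_i)^2 / (p_i + q_i)
    # over bins where at least one histogram has mass.
    return sum((pi - qi) ** 2 / (pi + qi)
               for pi, qi in zip(p, q) if pi + qi > 0)

def histogram_distance(p, q):
    # L1 histogram distance: sum of absolute per-bin differences.
    return sum(abs(pi - qi) for pi, qi in zip(p, q))

# Example: compare a baseline run's histogram to a scaled-up run's.
baseline = [0.1, 0.4, 0.3, 0.2]
scaled = [0.12, 0.38, 0.31, 0.19]
chi2 = chi_square_distance(baseline, scaled)
l1 = histogram_distance(baseline, scaled)
```

Both distances are zero for identical histograms and grow as the two runs' output distributions diverge, which is what makes them usable as runtime correctness signals.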
