Computational Astrophysics and Cosmology

On the parallelization of stellar evolution codes


Abstract

Multidimensional nucleosynthesis studies with hundreds of nuclei linked through thousands of nuclear processes are still computationally prohibitive. To date, most nucleosynthesis studies rely either on hydrostatic/hydrodynamic simulations in spherical symmetry or on post-processing simulations that couple temperature and density versus time profiles directly to huge nuclear reaction networks. Parallel computing has been regarded as the main enabling factor for computationally intensive simulations. This paper explores the pros and cons of parallelizing stellar codes, providing recommendations on when and how parallelization may help improve the performance of a code for astrophysical applications. We report on different parallelization strategies successfully applied to the spherically symmetric, Lagrangian, implicit hydrodynamic code SHIVA, extensively used in the modeling of classical novae and type I X-ray bursts. When only the matrix build-up and inversion processes in the nucleosynthesis subroutines are parallelized (a suitable approach for post-processing calculations), the large amount of time spent on communication between cores, together with the small problem size (limited by the number of isotopes in the nuclear network), results in much worse performance of the parallel application than of the 1-core, sequential version of the code. Parallelization of the matrix build-up and inversion processes in the nucleosynthesis subroutines is therefore not recommended unless the number of isotopes adopted largely exceeds 10,000. In sharp contrast, speed-up factors of 26 and 35 have been obtained with a parallelized version of SHIVA in a 200-shell simulation of a type I X-ray burst carried out with two nuclear reaction networks: a reduced one, consisting of 324 isotopes and 1392 reactions, and a more extended network with 606 nuclides and 3551 nuclear interactions.
Maximum speed-ups of ~41 (324-isotope network) and ~85 (606-isotope network) are also predicted for 200 cores, stressing that the number of shells in the computational domain constitutes an effective upper limit on the number of cores that can be used in a parallel application.
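The scaling behaviour summarized in the abstract can be illustrated with a simple Amdahl-style cost model. The sketch below is hypothetical and not taken from the paper: the `speedup` function, the `comm_cost` penalty, and all numeric values are illustrative assumptions chosen to show why a small per-shell matrix (communication-dominated) parallelizes poorly while a shell-parallel run with negligible overhead scales well up to the shell count.

```python
# Hypothetical Amdahl-style model with an additive communication penalty.
# All parameters are illustrative assumptions, not measured SHIVA values.

def speedup(n_cores, parallel_fraction, comm_cost=0.0):
    """Estimated speed-up on n_cores.

    parallel_fraction: fraction of the sequential runtime that parallelizes.
    comm_cost: communication overhead per core, as a fraction of the
               sequential runtime (grows linearly with core count here).
    """
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores + comm_cost * n_cores)

# Small problem (matrix size set by a few-hundred-isotope network):
# communication overhead grows faster than the per-core work shrinks,
# so beyond a few cores the parallel run is slower than the serial one.
small = [speedup(n, 0.95, comm_cost=0.02) for n in (1, 4, 16, 64)]

# Shell-parallel problem with negligible per-core overhead: speed-up
# keeps growing, until the number of shells caps the usable core count.
large = [speedup(n, 0.99, comm_cost=1e-4) for n in (1, 16, 64, 200)]
```

With these toy numbers the "small" case peaks at a handful of cores and then degrades below the sequential baseline, mirroring the abstract's warning about parallelizing only the matrix build-up and inversion; the "large" case keeps gaining up to 200 cores, consistent with the shell count acting as the effective ceiling.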
