A hybrid MPI-OpenMP scheme for scalable parallel pseudospectral computations for fluid turbulence

Abstract

A hybrid scheme that utilizes MPI for distributed memory parallelism and OpenMP for shared memory parallelism is presented. The work is motivated by the desire to achieve exceptionally high Reynolds numbers in pseudospectral computations of fluid turbulence on emerging petascale, high core-count, massively parallel processing systems. The hybrid implementation derives from and augments a well-tested scalable MPI-parallelized pseudospectral code. The hybrid paradigm leads to a new picture for the domain decomposition of the pseudospectral grids, which is helpful in understanding, among other things, the 3D transpose of the global data that is necessary for the parallel fast Fourier transforms that are the central component of the numerical discretizations. Details of the hybrid implementation are provided, and performance tests illustrate the utility of the method. It is shown that the hybrid scheme achieves good scalability up to ~20,000 compute cores with a maximum efficiency of 89%, and a mean of 79%. Data are presented that help guide the choice of the optimal number of MPI tasks and OpenMP threads in order to maximize code performance on two different platforms.
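To make the parallel layout described in the abstract concrete, the following is a minimal sketch, not the authors' code, of how a hybrid MPI-OpenMP setup over a slab-decomposed grid can look in C. The grid size N, the array u, the simple 1D (slab) decomposition, and the placeholder loop body are illustrative assumptions; the paper's actual code uses its own decomposition and the parallel FFT machinery discussed in the abstract.

/* Minimal hybrid MPI-OpenMP sketch (illustrative only).
 * Build with, e.g.:  mpicc -fopenmp -std=c99 hybrid_sketch.c  */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N 64   /* illustrative global grid size per dimension (assumption) */

int main(int argc, char **argv)
{
    int provided, rank, ntasks;

    /* Request threaded MPI; here only the master thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    /* 1D (slab) decomposition: each MPI task owns nz_local planes in z.
     * Assumes N is divisible by the number of tasks, for simplicity. */
    int nz_local = N / ntasks;
    double *u = malloc((size_t)nz_local * N * N * sizeof(double));

    /* Shared-memory parallelism inside the slab: OpenMP threads split the
     * local planes and rows among themselves. */
    #pragma omp parallel for collapse(2)
    for (int k = 0; k < nz_local; k++)
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                u[(size_t)(k * N + j) * N + i] = 0.0;  /* placeholder work */

    /* The global 3D transpose required by the parallel FFTs would be built on
     * collective exchanges between slabs (e.g. MPI_Alltoall), with the
     * pack/unpack loops themselves threaded with OpenMP. */

    if (rank == 0)
        printf("%d MPI tasks x %d OpenMP threads\n",
               ntasks, omp_get_max_threads());

    free(u);
    MPI_Finalize();
    return 0;
}

The division of labor in this sketch mirrors the hybrid paradigm the abstract describes: MPI provides the distributed-memory decomposition across nodes, while OpenMP exploits the shared memory within each node, so that the number of MPI tasks can stay moderate even as the total core count grows.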

Bibliographic details

  • Source
    Parallel Computing | 2011, No. 7 | pp. 316-326 | 11 pages
  • Author affiliations

    Institute for Mathematics Applied to Geosciences, National Center for Atmospheric Research, P.O. Box 3000, Boulder, CO 80307-3000, USA; Departamento de Fisica, Facultad de Ciencias Exactas y Naturales & IFIBA, CONICET, Ciudad Universitaria, 1428 Buenos Aires, Argentina;

    Institute for Mathematics Applied to Geosciences, National Center for Atmospheric Research, P.O. Box 3000, Boulder, CO 80307-3000, USA;

    Pittsburgh Supercomputing Center, 300 S. Craig Street, Pittsburgh, PA 15213, USA;

    Institute for Mathematics Applied to Geosciences, National Center for Atmospheric Research, P.O. Box 3000, Boulder, CO 80307-3000, USA;

  • Indexed in: Science Citation Index (SCI); Engineering Index (EI)
  • Original format: PDF
  • Language: English
  • Chinese Library Classification:
  • Keywords

    computational fluids; numerical simulation; mpi; openmp; parallel scalability;

