
Scalability of Incompressible Flow Computations on Multi-GPU Clusters Using Dual-Level and Tri-Level Parallelism


Abstract

High performance computing using graphics processing units (GPUs) is gaining popularity in the scientific computing field, with many large compute clusters being augmented with multiple GPUs in each node. We investigate hybrid tri-level (MPI-OpenMP-CUDA) parallel implementations to explore the efficiency and scalability of incompressible flow computations on GPU clusters of up to 128 GPUs. This work details some of the unique issues faced when merging fine-grain parallelism on the GPU using CUDA with coarse-grain parallelism using OpenMP for intra-node and MPI for inter-node communication. Comparisons between the tri-level MPI-OpenMP-CUDA and dual-level MPI-CUDA implementations are shown using large-scale computational fluid dynamics (CFD) simulations. Our results demonstrate that a tri-level parallel implementation does not provide a significant performance advantage over the dual-level implementation; however, further research is needed to confirm this conclusion for clusters with a higher GPU-per-node density or for software that can exploit OpenMP's fine-grain parallelism more effectively.
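
The abstract contains no code, but a minimal sketch of the tri-level structure it describes (MPI across nodes, OpenMP threads within each node, CUDA on each GPU) might look like the following. The `axpy` kernel and all sizes are hypothetical stand-ins for the paper's CFD solver, not its actual implementation.

```cuda
// Illustrative tri-level (MPI-OpenMP-CUDA) sketch, assuming one MPI rank per
// node and one OpenMP thread per GPU inside the node.
#include <mpi.h>
#include <omp.h>
#include <cuda_runtime.h>
#include <cstdio>

// Hypothetical fine-grain kernel standing in for a CFD stencil update.
__global__ void axpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] += a * x[i];
}

int main(int argc, char **argv) {
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int ngpus = 0;
    cudaGetDeviceCount(&ngpus);

    // Intra-node level: one OpenMP thread drives each GPU in the node.
    #pragma omp parallel num_threads(ngpus)
    {
        int tid = omp_get_thread_num();
        cudaSetDevice(tid);

        const int n = 1 << 20;
        float *x, *y;
        cudaMalloc(&x, n * sizeof(float));
        cudaMalloc(&y, n * sizeof(float));
        cudaMemset(x, 0, n * sizeof(float));
        cudaMemset(y, 0, n * sizeof(float));

        // Fine-grain level: CUDA kernel on this thread's GPU.
        axpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
        cudaDeviceSynchronize();

        cudaFree(x);
        cudaFree(y);
    }

    // Inter-node level: halo exchanges and reductions would use MPI here,
    // funneled through the master thread (MPI_THREAD_FUNNELED).
    MPI_Barrier(MPI_COMM_WORLD);

    if (rank == 0) printf("tri-level sketch ran on %d GPU(s) per node\n", ngpus);

    MPI_Finalize();
    return 0;
}
```

The dual-level MPI-CUDA variant compared in the paper drops the OpenMP layer and instead assigns one MPI rank per GPU, which is the alternative this sketch is meant to contrast with.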
