Journal: Computers & Mathematics with Applications
GPU acceleration of CaNS for massively-parallel direct numerical simulations of canonical fluid flows
Abstract

This work presents the GPU acceleration of the open-source code CaNS for very fast massively-parallel simulations of canonical fluid flows. The distinct feature of the many-CPU Navier-Stokes solver in CaNS is its fast direct solver for the second-order finite-difference Poisson equation, based on the method of eigenfunction expansions. The solver implements all the boundary conditions valid for this type of problem in a unified framework. Here, we extend the solver to GPU-accelerated clusters using CUDA Fortran. The porting makes extensive use of CUF kernels and has been greatly simplified by the unified memory feature of CUDA Fortran, which handles the data migration between host (CPU) and device (GPU) without defining new arrays in the source code. The overall implementation has been validated against benchmark data for turbulent channel flow, and its performance assessed on an NVIDIA DGX-2 system (16 Tesla V100 32 GB, connected with NVLink via NVSwitch). The wall-clock time per time step of the GPU-accelerated implementation is impressively small when compared to its CPU implementation on state-of-the-art many-CPU clusters, as long as the domain partitioning is sufficiently small that the data resides mostly on the GPUs. The implementation has been made freely available and open source under the terms of an MIT license. (C) 2020 Elsevier Ltd. All rights reserved.
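The direct Poisson solver referred to above diagonalizes the second-order finite-difference Laplacian with a discrete transform, so each Fourier mode can be solved independently via its eigenvalue (the "modified wavenumber"). As a minimal illustration of the method of eigenfunction expansions, not of the actual CaNS implementation (which is 3D, parallel, and written in CUDA Fortran), here is a hedged one-dimensional NumPy sketch for periodic boundary conditions:

```python
import numpy as np

def solve_poisson_periodic(f, dx):
    """Direct solve of the 1D second-order finite-difference Poisson
    equation (u[i-1] - 2*u[i] + u[i+1]) / dx**2 = f[i], periodic BCs.
    The FFT diagonalizes the difference operator; its eigenvalues are
    the modified wavenumbers lam[k]."""
    n = f.size
    fh = np.fft.fft(f)
    k = np.arange(n)
    lam = (2.0 * np.cos(2.0 * np.pi * k / n) - 2.0) / dx**2  # eigenvalues
    lam[0] = 1.0      # the zero mode is singular; pin the mean instead
    uh = fh / lam
    uh[0] = 0.0       # fix the additive constant: mean(u) = 0
    return np.fft.ifft(uh).real

# Verification with a manufactured solution u = sin(x) on [0, 2*pi):
n = 128
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
u_exact = np.sin(x)
# Build the RHS by applying the same discrete Laplacian to u_exact,
# so the solver should recover u_exact to machine precision.
f = (np.roll(u_exact, -1) - 2.0 * u_exact + np.roll(u_exact, 1)) / dx**2
u = solve_poisson_periodic(f, dx)
err = np.max(np.abs(u - u_exact))
print(err)
```

Other boundary conditions (Neumann, Dirichlet) are handled in the same framework by swapping the transform (e.g. cosine or sine transforms) and the corresponding eigenvalues; this is what the abstract means by supporting all valid boundary conditions in a unified way.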
