AIAA Infotech@Aerospace Conference; AIAA SciTech Forum

Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units



Abstract

For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using a multi-block, structured-grid, finite-volume numerical technique, with a dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare the resulting performance against Intel Xeon X5690 CPUs. Individual solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder geometry and ran a turbulent steady-flow simulation on 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks, each containing 1033x1033 grid points, for a total of 13.87 million grid points, or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.
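The grid-size and speedup figures quoted above can be cross-checked with a short calculation (not from the paper; the arithmetic below only re-derives the numbers stated in the abstract):

```python
# Cross-check of the grid sizes and speedup figures quoted in the abstract.

points_per_block = 1033 * 1033          # grid points in one domain block
blocks = 13
total_points = blocks * points_per_block

print(f"points per block: {points_per_block / 1e6:.2f} M")   # ~1.07 M
print(f"total points:     {total_points / 1e6:.2f} M")       # ~13.87 M

# 8 GPUs vs 8 CPUs gives ~6.0x; 8 GPUs vs 1 CPU gives ~39.5x.
# The implied scaling of 8 CPU cores over a single core is therefore:
cpu_scaling = 39.5 / 6.0
print(f"implied 8-CPU over 1-CPU scaling: {cpu_scaling:.2f}x")  # ~6.58x
```

Note that the two speedup figures are mutually consistent: 39.5 / 6.0 implies the 8-CPU baseline itself scales about 6.6x over one core, a plausible parallel efficiency of roughly 82% on 8 cores.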
