IEEE Conference on High Performance Extreme Computing

GPU accelerated geometric multigrid method: Comparison with preconditioned conjugate gradient



Abstract

Scientific applications are typically compute-intensive, often because they require solving large sparse systems of linear equations. The geometric multigrid method (GMG) is one of the most efficient algorithms for solving these systems and is well suited to parallelization. Here we focus on an in-depth analysis of a GPU-based GMG implementation and compare the results against an optimized preconditioned conjugate gradient (PCG) method. The tests indicate that the smoothing step is the most time-consuming operation, and the best-performing GMG variant is the V-cycle scheme with the 312 smoothing-step configuration (3 iterations during restriction, 1 at the coarsest level, and 2 iterations during prolongation). The discretization stencil has a major effect on the runtime, and its choice requires a trade-off between execution-time performance and numerical accuracy. Overall, the GMG method offers a speed-up of 7.1x-9.2x over the PCG method on the same hardware configuration, while also achieving a smaller average residual.
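The 312 smoothing configuration described in the abstract maps naturally onto a recursive V-cycle. The following is a minimal, CPU-only sketch in Python/NumPy for a 1D Poisson model problem; it is an illustrative assumption, not the paper's GPU implementation: the weighted-Jacobi smoother, full-weighting restriction, linear-interpolation prolongation, and grid sizes are stand-ins for whatever stencil and smoother the authors actually used.

```python
import numpy as np

def jacobi(u, f, h, iters, omega=2.0 / 3.0):
    """Weighted-Jacobi smoother for the 1D Poisson stencil (-1, 2, -1)/h^2."""
    for _ in range(iters):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    """Residual r = f - A u for the same stencil (boundaries stay zero)."""
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    """Full-weighting restriction to a grid with half the interior points."""
    rc = np.zeros((r.size - 1) // 2 + 1)
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])
    return rc

def prolong(ec, n_fine):
    """Linear-interpolation prolongation back to the fine grid."""
    e = np.zeros(n_fine)
    e[2:-1:2] = ec[1:-1]
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def v_cycle(u, f, h, nu=(3, 1, 2)):
    """One V-cycle with the 312 configuration: 3 smoothing sweeps on the
    restriction leg, 1 sweep at the coarsest level, 2 on the prolongation leg."""
    if u.size <= 3:                        # coarsest level: one interior point
        return jacobi(u, f, h, nu[1])
    u = jacobi(u, f, h, nu[0])             # pre-smoothing (restriction leg)
    rc = restrict(residual(u, f, h))
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h, nu)
    u += prolong(ec, u.size)               # coarse-grid correction
    return jacobi(u, f, h, nu[2])          # post-smoothing (prolongation leg)

if __name__ == "__main__":
    n = 257                                # 2^k + 1 points including boundaries
    h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    f = np.pi ** 2 * np.sin(np.pi * x)     # -u'' = f with u(0) = u(1) = 0
    u = np.zeros(n)
    for k in range(10):
        u = v_cycle(u, f, h)
        print(k, np.linalg.norm(residual(u, f, h), np.inf))
```

For this model problem the printed infinity-norm residual should shrink rapidly, on the order of a factor of ten per cycle, which is the qualitative behavior that makes the V-cycle competitive with PCG in the comparison above; the paper's measured 7.1x-9.2x speed-up additionally depends on the GPU kernels and stencil choices it evaluates.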

