IEEE Transactions on Biomedical Engineering

Accelerating Cardiac Bidomain Simulations Using Graphics Processing Units


Abstract

Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally extremely demanding, which limits wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging, since strongly scalable algorithms are required to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, the benefits in the context of bidomain simulations, where large sparse linear systems have to be solved in parallel with advanced numerical techniques, are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong-scalability benchmarks with a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element method (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code base, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of that code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6–20 GPUs compared to the same number of CPU cores. Matching the fastest GPU simulation, which engaged 20 GPUs, required 476 CPU cores on a national supercomputing facility.
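The dominant cost the abstract refers to is the repeated solution of large sparse linear systems arising from the FEM discretization, typically handled by a Krylov-subspace solver such as conjugate gradients. The following is a minimal, illustrative CPU-side sketch of that kind of solve using SciPy; it is not the CARP/GPU implementation, and the small 1-D Laplacian merely stands in for the real symmetric positive-definite operator assembled on a 3-D unstructured grid.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# Illustrative stand-in for an FEM system matrix: a 1-D Laplacian,
# which is sparse and symmetric positive definite like the operators
# that arise in bidomain simulations (assumption: real grids are 3-D
# and unstructured, so the sparsity pattern is irregular).
n = 1000
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = sp.diags([off, main, off], offsets=[-1, 0, 1], format="csr")
b = np.ones(n)

# Solve A x = b with conjugate gradients; in bidomain codes this
# Krylov iteration dominates run time and is what gets offloaded
# to GPUs in the study described above.
x, info = cg(A, b)
assert info == 0  # info == 0 signals convergence
```

In a multi-GPU setting the sparse matrix-vector products and vector reductions inside each CG iteration are distributed across devices, which is where the reported 11.8–16.3x speedups over the same number of CPU cores come from.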


