Journal: Parallel Computing > Failure recovery for bulk synchronous applications with MPI stages

Failure recovery for bulk synchronous applications with MPI stages



Abstract

When an MPI program experiences a failure, the most common recovery approach is to restart all processes from a previous checkpoint and to re-queue the entire job. A disadvantage of this method is that, although the failure occurred within the main application loop, live processes must start again from the beginning of the program alongside the new replacement processes, which incurs unnecessary overhead for the live processes. To avoid such overheads and the concomitant delays, we introduce the concept of "MPI Stages." MPI Stages saves internal MPI state in a separate checkpoint, in conjunction with application state. Upon failure, MPI state and application state are each recovered from their last synchronous checkpoints, and execution continues without restarting the overall MPI job. Live processes roll back only a few iterations within the main loop instead of rolling back to the beginning of the program, while a replacement for the failed process restarts and reintegrates, thereby achieving faster failure recovery. This approach integrates well with large-scale, bulk synchronous applications and checkpoint/restart. In this article, we identify the requirements for production MPI implementations to support state checkpointing with MPI Stages, which include capturing and managing internal MPI state and serializing and deserializing user handles to MPI objects. We evaluate our fault tolerance approach with a proof-of-concept prototype MPI implementation that includes MPI Stages. We demonstrate its functionality and performance using LULESH, CoMD, and microbenchmarks. Our results show that MPI Stages reduces recovery time for both LULESH and CoMD in comparison to checkpoint/restart and Reinit (a global-restart model). (C) 2019 Published by Elsevier B.V.
