
Open Issues in MPI Implementation


Abstract

MPI (the Message Passing Interface) continues to be the dominant programming model for parallel machines of all sizes, from small Linux clusters to the largest parallel supercomputers such as IBM Blue Gene/L and Cray XT3. Although the MPI standard was released more than 10 years ago and a number of implementations of MPI are available from both vendors and research groups, MPI implementations still need improvement in many areas. In this paper, we discuss several such areas, including performance, scalability, fault tolerance, support for debugging and verification, topology awareness, collective communication, derived datatypes, and parallel I/O. We also present results from experiments with several MPI implementations (MPICH2, Open MPI, Sun, IBM) on a number of platforms (Linux clusters, Sun and IBM SMPs) that demonstrate the need for performance improvement in one-sided communication and support for multithreaded programs.
