
A Scalable InfiniBand Network Topology-Aware Performance Analysis Tool for MPI



Abstract

Over the last decade, InfiniBand (IB) has become an increasingly popular interconnect for deploying modern supercomputing systems. As supercomputing systems grow in size and scale, the impact of the IB network topology on the performance of high performance computing (HPC) applications also increases. Depending on the kind of network (Fat Tree, Tori, or Mesh), the number of network hops involved in data transfer varies. No tool currently exists that allows users of such large-scale clusters to analyze and visualize the communication pattern of HPC applications in a network topology-aware manner. In this paper, we take up this challenge and design a scalable, low-overhead InfiniBand Network Topology-Aware Performance Analysis Tool for MPI - INTAP-MPI. INTAP-MPI allows users to analyze and visualize the communication pattern of HPC applications on any IB network (Fat Tree, Tori, or Mesh). We integrate INTAP-MPI into the MVAPICH2 MPI library, allowing users of HPC clusters to seamlessly use it for analyzing their applications. Our experimental analysis shows that INTAP-MPI is able to profile and visualize the communication pattern of applications with very low memory and performance overhead at scale.
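As a rough illustration of the kind of communication-pattern profiling the abstract describes (this is a minimal sketch, not the authors' INTAP-MPI implementation), the standard MPI profiling interface (PMPI) can intercept point-to-point calls and accumulate a per-peer byte count; a topology-aware post-processing step could then map each rank pair onto the number of IB hops between their nodes. The wrapper below intercepts only MPI_Send for brevity, and the output format is an illustrative assumption.

/*
 * Sketch: PMPI interposition that records how many bytes each rank
 * sends to every peer (one row of a communication matrix).
 * Compile into a shared library and link/preload it with the application.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

static long long *bytes_to_peer = NULL;  /* bytes this rank sent to each peer */
static int world_size = 0;

int MPI_Init(int *argc, char ***argv)
{
    int rc = PMPI_Init(argc, argv);
    PMPI_Comm_size(MPI_COMM_WORLD, &world_size);
    bytes_to_peer = calloc(world_size, sizeof(long long));
    return rc;
}

int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)
{
    int type_size = 0;
    PMPI_Type_size(datatype, &type_size);
    if (comm == MPI_COMM_WORLD && dest >= 0 && dest < world_size)
        bytes_to_peer[dest] += (long long)count * type_size;
    return PMPI_Send(buf, count, datatype, dest, tag, comm);
}

int MPI_Finalize(void)
{
    int rank;
    PMPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* Dump this rank's row of the communication matrix; a post-processing
     * step could join these pairs with hop counts queried from the IB
     * subnet to produce a topology-aware view. */
    for (int peer = 0; peer < world_size; peer++)
        if (bytes_to_peer[peer] > 0)
            printf("rank %d -> rank %d : %lld bytes\n",
                   rank, peer, bytes_to_peer[peer]);
    free(bytes_to_peer);
    return PMPI_Finalize();
}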

