Workshop on Big Data Management in Clouds

A Scalable InfiniBand Network Topology-Aware Performance Analysis Tool for MPI



Abstract

Over the last decade, InfiniBand (IB) has become an increasingly popular interconnect for deploying modern supercomputing systems. As supercomputing systems grow in size and scale, the impact of IB network topology on the performance of high performance computing (HPC) applications also increases. Depending on the kind of network (Fat-Tree, Torus, or Mesh), the number of network hops involved in data transfer varies. No tool currently exists that allows users of such large-scale clusters to analyze and visualize the communication pattern of HPC applications in a network topology-aware manner. In this paper, we take up this challenge and design a scalable, low-overhead InfiniBand Network Topology-Aware Performance Analysis Tool for MPI - INTAP-MPI. INTAP-MPI allows users to analyze and visualize the communication pattern of HPC applications on any IB network (Fat-Tree, Torus, or Mesh). We integrate INTAP-MPI into the MVAPICH2 MPI library, allowing users of HPC clusters to seamlessly use it for analyzing their applications. Our experimental analysis shows that INTAP-MPI is able to profile and visualize the communication pattern of applications with very low memory and performance overhead at scale.
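To illustrate the abstract's point that hop counts depend on the topology, the sketch below models hop distance in two of the networks mentioned: a two-level fat-tree and a k-ary torus. These are simplified textbook formulas for illustration only; they are not part of INTAP-MPI, and the function names and parameters are hypothetical.

```python
# Simplified hop-count models for two InfiniBand topologies mentioned
# in the abstract. Illustrative only -- not INTAP-MPI internals.

def fat_tree_hops(src_leaf: int, dst_leaf: int) -> int:
    """Hops in a two-level fat-tree: 1 hop if both endpoints share a
    leaf switch, otherwise up through a spine switch and back down (3)."""
    return 1 if src_leaf == dst_leaf else 3

def torus_hops(src: tuple, dst: tuple, dims: tuple) -> int:
    """Minimal hops in a k-ary torus: shortest wrap-around distance in
    each dimension, summed over all dimensions."""
    total = 0
    for s, d, k in zip(src, dst, dims):
        delta = abs(s - d)
        total += min(delta, k - delta)  # direct vs. wrap-around path
    return total

# Same leaf switch vs. different leaves in a fat-tree:
print(fat_tree_hops(0, 0))                  # 1
print(fat_tree_hops(0, 5))                  # 3
# Node (0,0) to node (7,3) in an 8x8 torus: wrap in dim 0 (1 hop) + 3 hops
print(torus_hops((0, 0), (7, 3), (8, 8)))   # 4
```

A topology-aware profiler can use such per-topology distance models to weight the volume of traffic each MPI process pair exchanges, which is how the communication pattern becomes topology-aware rather than a flat peer-to-peer matrix.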
