Venue: IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing

SR-IOV Support for Virtualization on InfiniBand Clusters: Early Experience



Abstract

High Performance Computing (HPC) systems are becoming increasingly complex and are associated with very high operational costs. The cloud computing paradigm, coupled with modern Virtual Machine (VM) technology, offers attractive techniques to easily manage large-scale systems while significantly bringing down the cost of computation, memory, and storage. However, running HPC applications on cloud systems remains a major challenge. One of the biggest hurdles in realizing this objective is the performance offered by virtualized computing environments, more specifically, virtualized I/O devices. Since HPC applications and communication middleware rely heavily on advanced features offered by modern high-performance interconnects such as InfiniBand, the performance of virtualized InfiniBand interfaces is crucial. Emerging hardware-based solutions, such as Single Root I/O Virtualization (SR-IOV), offer an attractive alternative to existing software-based solutions. The benefits of SR-IOV have been widely studied for GigE and 10GigE networks. However, with InfiniBand networks being increasingly adopted in the cloud computing domain, it is critical to fully understand the performance benefits of SR-IOV on InfiniBand networks, especially for exploring the performance characteristics and trade-offs of HPC communication middleware (such as Message Passing Interface (MPI) and Partitioned Global Address Space (PGAS)) and applications. To the best of our knowledge, this is the first paper that offers an in-depth analysis of SR-IOV with InfiniBand. Our experimental evaluations show that the performance of MPI and PGAS point-to-point communication benchmarks over SR-IOV with InfiniBand is comparable to that of native InfiniBand hardware for most message lengths. However, we observe that the performance of MPI collective operations over SR-IOV with InfiniBand is inferior to that of the native (non-virtualized) mode. We also evaluate the trade-offs of various VM-to-CPU mapping policies on modern multi-core architectures and present our experiences.
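The two mechanisms the abstract centers on, SR-IOV virtual functions and VM-to-CPU mapping, are commonly configured on Linux hosts roughly as sketched below. This is an illustrative configuration sketch, not the paper's setup: the PCI address, VF count, domain name, and core numbers are hypothetical, and the exact interface depends on the kernel version and HCA driver (the generic `sriov_numvfs` sysfs knob; some older Mellanox drivers used a module parameter instead).

```shell
# Enable 4 virtual functions on an SR-IOV-capable InfiniBand HCA via the
# generic kernel sysfs interface (PCI address is illustrative).
echo 4 > /sys/bus/pci/devices/0000:06:00.0/sriov_numvfs

# Verify that the virtual functions now appear on the PCI bus; each VF can
# then be passed through to a guest VM.
lspci | grep -i "Virtual Function"

# Pin a guest's virtual CPUs to specific host cores with libvirt, one
# example of a VM-to-CPU mapping policy ("vm1" is a hypothetical domain).
virsh vcpupin vm1 0 0
virsh vcpupin vm1 1 1
```

Which host cores the vCPUs are pinned to, for example whether they share a socket or NUMA node with the HCA and with co-located VMs, can noticeably affect communication performance, which is the kind of trade-off the paper's mapping-policy evaluation explores.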
