IEEE International Parallel and Distributed Processing Symposium Workshops and PhD Forum

Performance Characterization of Hypervisor- and Container-Based Virtualization for HPC on SR-IOV Enabled InfiniBand Clusters



Abstract

Hypervisor-based virtualization (e.g., KVM) has served as a fundamental technology in cloud computing. However, it incurs inherent performance overhead in virtualized environments, particularly for virtualized I/O devices. To alleviate this overhead, PCI passthrough can grant a VM exclusive access to an I/O device, but it then prevents the device from being shared among multiple VMs. Single Root I/O Virtualization (SR-IOV) has been introduced for high-performance interconnects such as InfiniBand to address this sharing issue while delivering near-ideal performance. On the other hand, advances in container-based virtualization (e.g., Docker) make it possible to reduce virtualization overhead by deploying containers instead of VMs, so that near-native performance can be obtained. To build a high-performance HPC cloud, it is important to fully understand the performance characteristics of different virtualization solutions and virtualized I/O technologies on InfiniBand clusters. In this paper, we conduct a comprehensive evaluation using IB verbs, MPI benchmarks, and applications. We characterize the performance of hypervisor- and container-based virtualization with PCI passthrough and SR-IOV for HPC on InfiniBand clusters. Our evaluation results indicate that a VM with PCI passthrough (VM-PT) outperforms a VM with SR-IOV (VM-SR-IOV), while SR-IOV enables efficient resource sharing. Overall, the container-based solution delivers better performance than the hypervisor-based solution. Compared with native performance, a container with PCI passthrough (Container-PT) incurs at most 9% overhead on HPC applications.
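SR-IOV lets a single physical InfiniBand HCA expose multiple virtual functions (VFs), each of which can be assigned to a different VM or container. As a minimal sketch of how VFs are typically created on Linux through the standard sysfs interface (the PCI address `0000:03:00.0` is a placeholder for illustration, not taken from the paper):

```shell
# Check how many VFs the HCA supports (sriov_totalvfs) and how many
# are currently enabled (sriov_numvfs). Find your HCA's PCI address
# with `lspci | grep -i infiniband`.
cat /sys/bus/pci/devices/0000:03:00.0/sriov_totalvfs
cat /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs

# Enable 4 virtual functions on the physical function (requires root).
echo 4 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs

# Each VF now appears as its own PCI device that can be passed to a
# VM (the paper's VM-SR-IOV configuration) or bound to a container.
lspci | grep -i "virtual function"
```

By contrast, the VM-PT configuration in the paper passes the entire physical function through to a single VM, which avoids VF multiplexing overhead but gives up the device sharing that SR-IOV provides.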

