
Energy-Efficient Thermal-Aware Autonomic Management of Virtualized HPC Cloud Infrastructure

Abstract

Virtualized datacenters and clouds are being increasingly considered for traditional High-Performance Computing (HPC) workloads that have typically targeted Grids and conventional HPC platforms. However, maximizing energy efficiency and utilization of datacenter resources, and minimizing undesired thermal behavior, while ensuring application performance and other Quality of Service (QoS) guarantees for HPC applications, requires careful consideration of important and extremely challenging tradeoffs. Virtual Machine (VM) migration is one of the most common techniques used to alleviate thermal anomalies (i.e., hotspots) in cloud datacenter servers, as it reduces load and, hence, server utilization. In this article, the benefits of using other techniques, such as voltage scaling and pinning (traditionally used to reduce energy consumption), for thermal management, as compared with VM migration, are studied in detail. As no single technique is the most efficient at meeting temperature/performance optimization goals in all situations, an autonomic approach is proposed that performs energy-efficient thermal management while ensuring the QoS delivered to the users. To address the VM allocation problem that arises during VM migrations, an innovative application-centric, energy-aware strategy for VM allocation is proposed. The proposed strategy ensures high resource utilization and energy efficiency through VM consolidation while satisfying application QoS, by exploiting knowledge obtained through application profiling along multiple dimensions (CPU, memory, and network bandwidth utilization). To support our arguments, we present the results obtained from an experimental evaluation on real hardware using HPC workloads under different scenarios.
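The abstract describes two mechanisms: an autonomic controller that chooses among voltage scaling (DVFS), pinning, and VM migration to resolve a hotspot, and an application-profile-driven VM allocation that consolidates VMs along CPU, memory, and network-bandwidth dimensions. The sketch below is only an illustrative reading of those ideas, not the paper's implementation; the thresholds, the class names (VMProfile, Host), and the best-fit heuristic are assumptions introduced here for clarity.

```python
"""
Illustrative sketch only (not the paper's implementation): (1) an autonomic
controller that picks among DVFS (voltage scaling), CPU pinning, and VM
migration to resolve a hotspot, and (2) a profile-driven, consolidation-
oriented VM placement over CPU, memory, and network-bandwidth dimensions.
Thresholds, class names, and heuristics below are assumptions.
"""
from dataclasses import dataclass
from typing import List, Optional

DIMS = ("cpu", "mem", "net")  # profiling dimensions named in the abstract

@dataclass
class VMProfile:
    name: str
    cpu: float  # fraction of a host's CPU capacity (0..1)
    mem: float  # fraction of host memory
    net: float  # fraction of host network bandwidth

@dataclass
class Host:
    name: str
    temp_c: float
    vms: List[VMProfile]

    def load(self, dim: str) -> float:
        return sum(getattr(vm, dim) for vm in self.vms)

    def fits(self, vm: VMProfile) -> bool:
        return all(self.load(d) + getattr(vm, d) <= 1.0 for d in DIMS)

# --- (1) Autonomic choice of thermal-management technique (assumed thresholds) ---
TEMP_WARN, TEMP_CRIT = 70.0, 85.0  # degrees Celsius, illustrative only

def thermal_action(host: Host) -> str:
    """Pick the cheapest technique expected to remove the hotspot."""
    if host.temp_c < TEMP_WARN:
        return "none"
    if host.temp_c < TEMP_CRIT:
        # Mild hotspot: prefer in-place actuators that avoid migration overhead.
        return "dvfs" if host.load("cpu") > 0.8 else "pin"
    # Severe hotspot: shed load, which raises the allocation problem handled below.
    return "migrate"

# --- (2) Application-profile-driven, consolidation-oriented placement ---
def place(vm: VMProfile, hosts: List[Host]) -> Optional[Host]:
    """Best-fit across CPU/mem/net: choose the feasible host whose largest
    residual dimension after placement is smallest, favouring consolidation."""
    feasible = [h for h in hosts if h.fits(vm)]
    if not feasible:
        return None  # no host can honour the VM's profile; defer or scale out
    def slack(h: Host) -> float:
        return max(1.0 - (h.load(d) + getattr(vm, d)) for d in DIMS)
    target = min(feasible, key=slack)
    target.vms.append(vm)
    return target

if __name__ == "__main__":
    hosts = [Host("node1", 72.0, []), Host("node2", 55.0, [])]
    for vm in (VMProfile("hpc-a", 0.6, 0.3, 0.2), VMProfile("hpc-b", 0.3, 0.4, 0.1)):
        chosen = place(vm, hosts)
        print(vm.name, "->", chosen.name if chosen else "queued")
    for h in hosts:
        print(h.name, "thermal action:", thermal_action(h))
```

Under these assumptions, in-place actuators (DVFS, pinning) are attempted before migration, since migration incurs overhead and reopens the allocation problem that the profile-driven placement addresses.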

Bibliographic Record

  • Source
    Journal of Grid Computing | 2012, Issue 3 | pp. 447-473 | 27 pages
  • Author Affiliation

    NSF Cloud and Autonomic Computing Center, Rutgers Discovery Informatics Institute, Rutgers University, 94 Brett Road, Piscataway, NJ, 08854, USA

  • Indexing Information
  • Format: PDF
  • Language: English
  • CLC Classification
  • Keywords

    Cloud infrastructure; Virtualization; Thermal management; Energy-efficiency;
