ASME International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems
IMPROVING ENERGY EFFICIENCY IN DATA CENTERS BY CONTROLLING TASK DISTRIBUTION AND COOLING

Abstract

The rapid growth in cloud computing, the Internet of Things (IoT), and data processing via Machine Learning (ML) has greatly increased our need for computing resources. Given this rapid growth, data centers are expected to consume an ever-larger share of the global energy supply, so improving their energy efficiency is crucial. One of the biggest sources of energy consumption is the energy required to cool data centers and keep servers within their intended operating temperature range. Indeed, about 40% of a data center's total power consumption goes to air conditioning [1]. Here, we study how server air inlet, air outlet, and CPU temperatures depend upon server loads typical of real Internet Protocol (IP) traces. The trace data used here are from Google clusters and include timestamps, job and task IDs, and the number and usage of CPU cores. The resulting IT loads are distributed using standard load-balancing methods such as Round Robin (RR) and the CPU-utilization method. Experiments are conducted in the Data Center Laboratory (DCL) at the Georgia Institute of Technology to monitor the server outlet air temperature, as well as real-time CPU temperatures, for servers at different heights within the rack. Server temperatures were measured by online temperature monitoring with XBee radios, Raspberry Pi and Arduino boards, and hot-wire anemometers. Given that the temperature response varies with server position, in part due to spatial variations in the cooling airflow over the rack inlet and in the server fan speeds, a new load-balancing approach that accounts for the spatially varying temperature response within a rack is tested and validated in this paper.
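The two families of policies contrasted in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual controller: the server names, temperature values, and the greedy "assign to the coolest server" rule with a fixed per-task temperature rise are all assumptions for demonstration.

```python
from itertools import cycle

def round_robin(tasks, servers):
    """RR baseline: assign tasks to servers in a fixed rotation,
    ignoring temperature entirely."""
    assignment = {s: [] for s in servers}
    rr = cycle(servers)
    for task in tasks:
        assignment[next(rr)].append(task)
    return assignment

def temperature_aware(tasks, cpu_temps, temp_rise_per_task=2.0):
    """Greedy temperature-aware sketch: send each task to the server
    with the lowest predicted CPU temperature, so servers in poorly
    cooled rack positions (e.g. near the top) receive less work.
    temp_rise_per_task (deg C) is an assumed linear load-to-temperature
    model, not a measured value."""
    predicted = dict(cpu_temps)
    assignment = {s: [] for s in predicted}
    for task in tasks:
        coolest = min(predicted, key=predicted.get)
        assignment[coolest].append(task)
        predicted[coolest] += temp_rise_per_task
    return assignment

# Hypothetical rack: upper servers run hotter due to spatial
# variation in the cooling airflow over the rack inlet.
temps = {"srv_bottom": 45.0, "srv_mid": 50.0, "srv_top": 58.0}
tasks = list(range(6))
print(round_robin(tasks, list(temps)))   # even split, heat-blind
print(temperature_aware(tasks, temps))   # work shifts to cooler servers
```

Under these assumed temperatures, the temperature-aware policy routes most tasks to the cooler bottom of the rack and leaves the hottest server idle, which is the qualitative behavior the abstract's position-dependent approach targets.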
