ASME International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems

IMPROVING ENERGY EFFICIENCY IN DATA CENTERS BY CONTROLLING TASK DISTRIBUTION AND COOLING



Abstract

The rapid growth of cloud computing, the Internet of Things (IoT), and data processing via Machine Learning (ML) has greatly increased our need for computing resources. Given this rapid growth, data centers are expected to consume an ever-larger share of the global energy supply, so improving their energy efficiency is crucial. One of the largest sources of energy consumption is the cooling required to keep servers within their intended operating temperature range; indeed, about 40% of a data center's total power consumption goes to air conditioning. Here, we study how the server air inlet, air outlet, and CPU temperatures depend upon server loads typical of real Internet Protocol (IP) traces. The trace data used here are from Google clusters and include the times, job and task IDs, and the number and usage of CPU cores. The resulting IT loads are distributed using standard load-balancing methods such as Round Robin (RR) and the CPU utilization method. Experiments are conducted in the Data Center Laboratory (DCL) at the Georgia Institute of Technology to monitor the server outlet air temperature, as well as real-time CPU temperatures, for servers at different heights within the rack. Server temperatures were measured by online monitoring with XBee, Raspberry Pi, and Arduino devices, together with hot-wire anemometers. Given that the temperature response varies with server position, in part due to spatial variations in the cooling airflow over the rack inlet and in the server fan speeds, a new load-balancing approach that accounts for the spatially varying temperature response within a rack is tested and validated in this paper.
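The contrast between position-blind and temperature-aware dispatch can be sketched as follows. This is a minimal illustration, not the paper's method: the linear thermal model (CPU temperature as inlet temperature plus a per-server response coefficient times load) and all names are illustrative assumptions standing in for the measured, position-dependent responses described above.

```python
# Hypothetical sketch: Round Robin vs. a temperature-aware dispatcher
# that accounts for each server's position-dependent thermal response.
# The linear model cpu_temp = inlet_temp + temp_response * load is an
# assumption for illustration only.
from dataclasses import dataclass
from itertools import cycle

@dataclass
class Server:
    name: str
    temp_response: float  # deg C added per unit of load (varies with rack height)
    inlet_temp: float     # cooling-air temperature at this server's inlet
    load: float = 0.0

    @property
    def cpu_temp(self) -> float:
        return self.inlet_temp + self.temp_response * self.load

def round_robin(servers):
    """Position-blind dispatch: cycle through servers in fixed order."""
    rr = cycle(servers)
    def dispatch(task_load: float) -> Server:
        s = next(rr)
        s.load += task_load
        return s
    return dispatch

def temperature_aware(servers):
    """Send each task where it yields the lowest resulting CPU temperature."""
    def dispatch(task_load: float) -> Server:
        s = min(servers,
                key=lambda s: s.inlet_temp + s.temp_response * (s.load + task_load))
        s.load += task_load
        return s
    return dispatch
```

Under this toy model, the temperature-aware policy steers load away from servers whose rack position gives them warm inlet air or a steep thermal response, which lowers the peak CPU temperature relative to Round Robin for the same total load.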
