Asia and South Pacific Design Automation Conference

Modular reinforcement learning for self-adaptive energy efficiency optimization in multicore system



Abstract

Energy efficiency is becoming increasingly important in modern computing systems with multi-/many-core architectures. Dynamic Voltage and Frequency Scaling (DVFS), an effective low-power technique, has been widely applied to improve energy efficiency in commercial multi-core systems. However, due to the large number of cores and the growing complexity of emerging applications, it is difficult to efficiently find a globally optimized voltage/frequency assignment at runtime. To improve the energy efficiency of the overall multicore system, we propose an online DVFS control strategy based on core-level Modular Reinforcement Learning (MRL) that adaptively selects an appropriate operating frequency for each individual core. Instead of focusing solely on local core conditions, MRL makes comprehensive decisions by considering the running states of multiple cores, without incurring the exponential memory cost of traditional monolithic Reinforcement Learning (RL). Experimental results on various realistic applications and different system scales show that the proposed approach improves energy efficiency by up to 28% compared to the recent individual-RL approach.
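The per-core decomposition the abstract describes can be illustrated with a minimal Q-learning sketch. This is not the authors' implementation: the state encoding (a local load bucket plus a coarse summary of the other cores' loads), the candidate frequency set, and the reward function are all illustrative assumptions. The point it shows is the memory argument — each core keeps its own small Q-table keyed on a compact state, so storage grows linearly with the number of cores rather than exponentially as in a monolithic joint-state table.

```python
import random
from collections import defaultdict

# Illustrative candidate operating frequencies (GHz); assumed, not from the paper.
FREQS = [0.8, 1.0, 1.2, 1.5]

class CoreAgent:
    """One Q-learning module per core (modular RL sketch)."""
    def __init__(self, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)   # (state, frequency) -> estimated value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        # epsilon-greedy frequency selection
        if random.random() < self.eps:
            return random.choice(FREQS)
        return max(FREQS, key=lambda f: self.q[(state, f)])

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, f)] for f in FREQS)
        td = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td

def step(agents, loads, reward_fn):
    """One control epoch: every core picks a frequency, then learns.

    Each agent's state couples its own load bucket with a coarse mean of
    the other cores' loads, so decisions account for multiple cores
    without enumerating the full joint state space. reward_fn stands in
    for a measured energy-efficiency signal. For brevity this one-step
    sketch reuses the current state as the next state.
    """
    states, actions = [], []
    for i, agent in enumerate(agents):
        others = [loads[j] for j in range(len(loads)) if j != i]
        state = (loads[i], round(sum(others) / len(others)))
        states.append(state)
        actions.append(agent.act(state))
    for i, agent in enumerate(agents):
        agent.update(states[i], actions[i], reward_fn(loads[i], actions[i]), states[i])
    return actions
```

With N cores, S per-core states, and F frequencies, the modular tables hold N·S·F entries, versus S^N·F^N joint entries for a monolithic learner over the same system.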

