Design, Automation & Test in Europe Conference & Exhibition (DATE)

Machine Learned Machines: Adaptive co-optimization of caches, cores, and On-chip Network


Abstract

Modern multicore architectures require runtime optimization techniques to address the mismatch between the dynamic resource requirements of different processes and the runtime allocation. Choosing between multiple optimizations at runtime is complex because their effects are non-additive, which makes the adaptiveness of machine learning techniques useful. We present a novel method, Machine Learned Machines (MLM), that uses Online Reinforcement Learning (RL) to perform dynamic partitioning of the last-level cache (LLC) together with dynamic voltage and frequency scaling (DVFS) of the core and uncore (interconnection network and LLC). We show that the co-optimization yields a much lower energy-delay product (EDP) than any of the techniques applied individually. The results show an average EDP improvement of 19.6% and an execution-time improvement of 2.6% over the baseline.
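The abstract does not spell out the learning formulation, so the following is only an illustrative sketch of how an online RL controller for this kind of co-optimization could look: a tabular Q-learning agent (an assumption, not necessarily the paper's algorithm) that picks a joint setting of LLC ways, core V/F level, and uncore V/F level each control interval and is rewarded with the negative energy-delay product. The knob values, state encoding, and the helpers read_state()/apply() are hypothetical and stand in for performance-counter sampling and hardware knob writes.

```python
import random
from collections import defaultdict

# Hypothetical knob settings; the real action space is not given in the abstract.
LLC_WAYS = [2, 4, 6, 8]          # LLC ways granted to the monitored partition
CORE_FREQS = [1.0, 1.5, 2.0]     # core DVFS levels (GHz)
UNCORE_FREQS = [0.8, 1.2, 1.6]   # uncore (NoC + LLC) DVFS levels (GHz)

ACTIONS = [(w, cf, uf) for w in LLC_WAYS
                       for cf in CORE_FREQS
                       for uf in UNCORE_FREQS]


class JointEDPAgent:
    """Tabular Q-learning over the joint (LLC ways, core V/F, uncore V/F) space."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # (state, action) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # Epsilon-greedy: mostly exploit the best known joint setting,
        # occasionally explore a random one.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])


def reward_from_counters(energy_joules, delay_seconds):
    # Reward is the negative energy-delay product (EDP = energy * delay),
    # so minimizing EDP corresponds to maximizing reward.
    return -(energy_joules * delay_seconds)


# Usage sketch for one control interval (read_state/apply are hypothetical):
# agent = JointEDPAgent()
# s = read_state(); a = agent.choose(s); apply(a)
# ... run the interval, measure energy and delay from counters ...
# agent.update(s, a, reward_from_counters(energy, delay), read_state())
```

A joint action space like this is what lets the controller capture the non-additive interactions the abstract mentions, since the value of a cache allocation is learned together with the core and uncore frequency settings rather than separately.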

