Proceedings of the 42nd Annual IEEE/ACM International Symposium on Microarchitecture

SHARP control: Controlled shared cache management in chip multiprocessors


Abstract

Shared resources in chip multiprocessors (CMPs) pose unique challenges to the seamless adoption of CMPs in virtualization environments and high performance computing systems. While sharing resources like the on-chip last-level cache is generally beneficial due to increased resource utilization, lack of control over the management of these resources can lead to loss of determinism, degraded performance isolation, and an overall absence of any notion of Quality of Service (QoS) provided to individual applications. This has direct ramifications for adhering to service-level agreements in environments that consolidate multiple heterogeneous workloads. Although providing QoS in the presence of shared resources has been addressed in the literature, it has been commonly observed that reserving resources for QoS leads to under-utilization of those resources. This paper proposes the use of formal control theory to dynamically partition the shared last-level cache in CMPs by optimizing cache space utilization among multiple concurrently executing applications with well-defined service-level objectives. The advantage of using formal feedback control lies in the theoretical guarantee we can provide on maximizing the utilization of the cache space in a fair manner. Using feedback control, we demonstrate that our fair speedup improvement scheme regulates cache allocation to applications dynamically such that we achieve a high fair speedup (a global performance fairness metric). We also propose an adaptive, feedback-control-based cache partitioning scheme that achieves service differentiation among applications with minimal impact on the fair speedup. Extensive simulations using a full-system simulator with accurate timing models and a set of diverse multiprogrammed workloads show that our fair speedup improvement scheme achieves a 21.9% improvement in the fair speedup metric across various benchmarks, and that our service differentiation scheme achieves well-regulated service differentiation.
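For context on the metric the abstract refers to: fair speedup is commonly computed as the harmonic mean of each co-scheduled application's speedup, where an application's speedup is its IPC when sharing the cache divided by its IPC when running with the cache to itself. The C++ sketch below illustrates that computation together with a single, simplified reallocation step that moves one cache way per control interval from the application with the highest relative speedup to the one with the lowest. The struct fields (ipc_alone, ipc_shared, ways), the tolerance, and the greedy swap are illustrative assumptions; they stand in for, and are not, the formal feedback controller the paper describes.

```cpp
// Minimal sketch: fair-speedup computation plus a simplified per-interval
// reallocation step. Names and the greedy heuristic are illustrative
// assumptions, not the controller formulated in the paper.
#include <algorithm>
#include <cstdio>
#include <vector>

struct App {
    double ipc_alone;   // IPC with the whole last-level cache to itself (profiled)
    double ipc_shared;  // IPC measured with its current share of the cache
    int    ways;        // cache ways currently allocated to this application
};

// Fair speedup: harmonic mean of per-application speedups (ipc_shared / ipc_alone).
double fair_speedup(const std::vector<App>& apps) {
    double sum_inverse = 0.0;
    for (const App& a : apps)
        sum_inverse += a.ipc_alone / a.ipc_shared;  // 1 / speedup_i
    return static_cast<double>(apps.size()) / sum_inverse;
}

// One control interval: if the gap between the best and worst relative speedups
// exceeds a tolerance, move one way from the best-off application to the worst-off.
void rebalance(std::vector<App>& apps, double tolerance = 0.05) {
    auto speedup = [](const App& a) { return a.ipc_shared / a.ipc_alone; };
    auto by_speedup = [&](const App& x, const App& y) { return speedup(x) < speedup(y); };
    auto worst = std::min_element(apps.begin(), apps.end(), by_speedup);
    auto best  = std::max_element(apps.begin(), apps.end(), by_speedup);
    if (speedup(*best) - speedup(*worst) > tolerance && best->ways > 1) {
        best->ways -= 1;   // donor gives up one way
        worst->ways += 1;  // worst-off application receives it
    }
}

int main() {
    // Hypothetical measurements for two applications sharing a 16-way cache.
    std::vector<App> apps = {
        {1.2, 0.9, 8},   // speedup 0.75
        {0.8, 0.7, 8},   // speedup 0.875
    };
    std::printf("fair speedup: %.3f\n", fair_speedup(apps));
    rebalance(apps);
    std::printf("ways after one interval: %d and %d\n", apps[0].ways, apps[1].ways);
    return 0;
}
```

In the paper's setting, the reallocation decision each interval would come from the feedback controller acting on measured speedups rather than from this greedy swap; the harmonic-mean form of fair speedup shown here is the definition commonly used in the cache-partitioning literature.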
