International Conference on Parallel Architectures and Compilation Techniques

Architectural support for operating system-driven CMP cache management


Abstract

The role of the operating system (OS) in managing shared resources such as CPU time, memory, peripherals, and even energy is well motivated and understood [23]. Unfortunately, one key resource, the lower-level shared cache in chip multiprocessors (CMPs), is commonly managed purely in hardware by rudimentary replacement policies such as least-recently-used (LRU). The rigid nature of the hardware cache management policy poses a serious problem, since no single cache management policy is best across all sharing scenarios. For example, the policy for a scenario where applications from a single organization run under a "best effort" performance expectation is likely to differ from the policy for a scenario where applications from competing business entities (say, at a third-party data center) run under a minimum service-level expectation. When it comes to managing shared caches, there is an inherent tension between flexibility and performance. On one hand, managing the shared cache in the OS offers immense policy flexibility, since the policy can be implemented in software; unfortunately, it is prohibitively expensive in terms of performance for the OS to be involved in managing temporally fine-grained events such as cache allocation. On the other hand, sophisticated hardware-only cache management techniques have been proposed to achieve fair sharing or throughput maximization, but they offer no policy flexibility.

This paper addresses the problem by designing architectural support that lets the OS efficiently manage shared caches under a wide variety of policies. Our scheme consists of a hardware cache quota management mechanism, an OS interface, and a set of OS-level quota orchestration policies. The hardware mechanism guarantees that OS-specified quotas are enforced in the shared cache, eliminating the need for (and the performance penalty of) temporally fine-grained OS intervention. The OS retains policy flexibility, since it can tune the quotas during regularly scheduled OS interventions. We demonstrate that our scheme can support a wide range of policies, including policies that provide (a) passive performance differentiation, (b) reactive fairness by miss-rate equalization, and (c) reactive performance differentiation.
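The division of labor described above (hardware enforces quotas, the OS periodically retunes them) can be sketched in a few lines. The following is a minimal illustrative sketch of policy (b), reactive fairness by miss-rate equalization; the `CacheQuotaHW` class and all names in it are hypothetical stand-ins for the paper's hardware mechanism and OS interface, not an API from the paper.

```python
# Hedged sketch: an OS-level quota orchestration policy (reactive fairness
# by miss-rate equalization) on top of a hypothetical hardware quota
# mechanism. All identifiers here are illustrative assumptions.

class CacheQuotaHW:
    """Stand-in for the hardware quota mechanism: the OS writes
    per-application way quotas; hardware enforces them at allocation time."""
    def __init__(self, total_ways, apps):
        self.total_ways = total_ways
        # Start from an even split of the shared cache's ways.
        self.quota = {a: total_ways // len(apps) for a in apps}

    def set_quota(self, app, ways):
        self.quota[app] = ways

def equalize_miss_rates(hw, miss_rates, step=1):
    """At each regularly scheduled OS intervention, move `step` ways from
    the application with the lowest miss rate to the one with the highest,
    nudging miss rates toward equality over successive interventions."""
    victim = min(miss_rates, key=miss_rates.get)   # has capacity to spare
    needy  = max(miss_rates, key=miss_rates.get)   # cache-starved
    if victim != needy and hw.quota[victim] > step:
        hw.set_quota(victim, hw.quota[victim] - step)
        hw.set_quota(needy,  hw.quota[needy]  + step)

# Example: app B misses far more often, so it gains a way from app A.
hw = CacheQuotaHW(total_ways=16, apps=["A", "B"])
equalize_miss_rates(hw, {"A": 0.02, "B": 0.30})
print(hw.quota)  # {'A': 7, 'B': 9}
```

Because the quota adjustment happens only at scheduling-granularity intervals, the OS pays no per-allocation cost; the hardware enforces the current quotas between interventions.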
