IEEE Transactions on Computers

Size-Aware Cache Management for Compressed Cache Architectures


Abstract

A practical way to increase the effective capacity of a microprocessor's cache, without physically increasing the cache size, is to employ data compression. Last-Level Caches (LLC) are particularly amenable to such compression schemes, since the primary purpose of the LLC is to minimize the miss rate, i.e., it directly benefits from a larger logical capacity. In compressed LLCs, the cacheline size varies depending on the achieved compression ratio. Our observations indicate that this size information gives useful hints when managing the cache (e.g., when selecting a victim), which can lead to increased cache performance. However, there are currently no replacement policies tailored to compressed LLCs; existing techniques focus primarily on locality information. This article introduces the concept of size-aware cache management as a way to maximize the performance of compressed caches. Upon analyzing the benefits of considering size information in the management of compressed caches, we propose a novel mechanism, called Effective Capacity Maximizer (ECM), to further improve the performance and energy efficiency of compressed LLCs. The proposed technique revolves around four fundamental principles: ECM Insertion (ECM-I), ECM Promotion (ECM-P), ECM Eviction Scheduling (ECM-ES), and ECM Replacement (ECM-R). Extensive simulations with memory traces from real applications running on a full-system simulator demonstrate significant improvements compared to compressed cache schemes employing conventional locality-aware cache replacement policies. Specifically, our ECM shows an average effective capacity increase of 18.4 percent over the Least-Recently Used (LRU) policy, and 23.9 percent over the Dynamic Re-Reference Interval Prediction (DRRIP) scheme. This translates into average system performance improvements of 7.2 percent over LRU and 4.2 percent over DRRIP. Moreover, the average energy consumption is also reduced by 5.9 percent over LRU and 3.8 percent over DRRIP.
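To make the idea of size-aware victim selection concrete, the following is a minimal toy sketch. It is not the ECM algorithm described in the paper; the `Line` structure, the staleness-times-size scoring heuristic, and all names are invented for illustration. It only shows the general intuition the abstract describes: in a compressed cache, evicting a stale line that is also large in its compressed form frees more effective capacity per eviction than locality-only policies such as LRU would.

```python
# Illustrative sketch only: a toy size-aware victim selector for one set
# of a compressed cache. NOT the paper's ECM mechanism; the scoring
# heuristic (staleness * compressed size) is a hypothetical example.
from dataclasses import dataclass

@dataclass
class Line:
    tag: int
    size: int       # compressed size in bytes (varies per line)
    last_use: int   # logical timestamp of the last access

def pick_victims(cache_set, needed_bytes, now):
    """Evict enough lines to free at least `needed_bytes`, preferring
    lines that are both stale (large now - last_use) and bulky (large
    compressed size), so each eviction frees more effective capacity."""
    ranked = sorted(cache_set,
                    key=lambda l: (now - l.last_use) * l.size,
                    reverse=True)
    victims, freed = [], 0
    for line in ranked:
        if freed >= needed_bytes:
            break
        victims.append(line)
        freed += line.size
    return victims

# Example: a set holding four lines of varying compressed size.
s = [Line(0, 64, 10), Line(1, 16, 11), Line(2, 32, 1), Line(3, 8, 12)]
vs = pick_victims(s, 48, now=20)  # one 64-byte stale line suffices
```

A pure LRU policy would instead evict line 2 (the least recently used), which frees only 32 bytes and would force a second eviction; weighting by compressed size lets the single 64-byte victim satisfy the request.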
