CHOP: Adaptive Filter-Based DRAM Caching for CMP Server Platforms

In: 2010 IEEE 16th International Symposium on High Performance Computer Architecture (HPCA)

Abstract

As manycore architectures enable a large number of cores on the die, a key challenge that emerges is the availability of memory bandwidth with conventional DRAM solutions. To address this challenge, integration of large DRAM caches that provide as much as 5× higher bandwidth and as low as 1/3rd of the latency (as compared to conventional DRAM) is very promising. However, organizing and implementing a large DRAM cache is challenging because of two primary tradeoffs: (a) DRAM caches at cache-line granularity require too large an on-chip tag area, making them undesirable, and (b) DRAM caches at larger page granularity require too much bandwidth, because the miss rate does not decrease enough to overcome the bandwidth increase. In this paper, we propose CHOP (Caching HOt Pages) in DRAM caches to address these challenges. We study several filter-based DRAM caching techniques: (a) a filter cache (CHOP-FC) that profiles pages and determines the hot subset of pages to allocate into the DRAM cache, (b) a memory-based filter cache (CHOP-MFC) that spills and fills filter state to improve the accuracy and reduce the size of the filter cache, and (c) an adaptive DRAM caching technique (CHOP-AFC) that determines when the filter cache should be enabled and disabled for DRAM caching. We conduct detailed simulations with server workloads to show that our filter-based DRAM caching techniques achieve the following: (a) on average over 30% performance improvement over previous solutions, (b) tag-space area overhead several orders of magnitude lower than that of cache-line-based DRAM caches, and (c) significantly lower memory bandwidth consumption compared to page-granular DRAM caches.
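The core CHOP-FC idea — track per-page access counts in a small filter cache and allocate a page into the DRAM cache only once it proves hot — can be illustrated with a minimal simulation sketch. This is not the paper's hardware implementation; the page size, hot threshold, and structure capacities below are illustrative assumptions, and both structures are modeled with simple LRU replacement.

```python
# Sketch of a CHOP-FC-style hot-page filter (illustrative parameters only).
from collections import OrderedDict

PAGE_SIZE = 4096        # assumed page granularity
HOT_THRESHOLD = 32      # accesses before a page is considered hot (assumed)
FILTER_CAPACITY = 1024  # pages tracked by the filter cache (assumed)
DRAM_CACHE_PAGES = 256  # hot pages the DRAM cache can hold (assumed)

class HotPageFilter:
    def __init__(self):
        self.counters = OrderedDict()    # page -> access count, LRU-ordered
        self.dram_cache = OrderedDict()  # pages resident in the DRAM cache

    def access(self, addr):
        """Record one memory access; return True on a DRAM-cache hit."""
        page = addr // PAGE_SIZE
        if page in self.dram_cache:
            self.dram_cache.move_to_end(page)  # refresh LRU position
            return True
        # Miss: bump (or install) this page's counter in the filter cache.
        count = self.counters.pop(page, 0) + 1
        self.counters[page] = count
        if len(self.counters) > FILTER_CAPACITY:
            self.counters.popitem(last=False)  # evict least-recently-used entry
        # Promote the page into the DRAM cache once it crosses the threshold.
        if count >= HOT_THRESHOLD:
            del self.counters[page]
            if len(self.dram_cache) >= DRAM_CACHE_PAGES:
                self.dram_cache.popitem(last=False)
            self.dram_cache[page] = True
        return False
```

Because only pages that cross the threshold are allocated, cold pages never consume DRAM-cache fill bandwidth — the tradeoff the abstract identifies for page-granular caches.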
