IEEE International Symposium on High Performance Computer Architecture

CHOP: Adaptive Filter-Based DRAM Caching for CMP Server Platforms



Abstract

As manycore architectures enable a large number of cores on the die, a key challenge that emerges is the availability of memory bandwidth with conventional DRAM solutions. To address this challenge, integration of large DRAM caches that provide as much as 5× higher bandwidth and as little as one-third the latency (as compared to conventional DRAM) is very promising. However, organizing and implementing a large DRAM cache is challenging because of two primary tradeoffs: (a) DRAM caches at cache-line granularity require too large an on-chip tag area, making them undesirable, and (b) DRAM caches at larger page granularity require too much bandwidth, because the miss rate does not drop enough to overcome the bandwidth increase. In this paper, we propose CHOP (Caching HOt Pages) in DRAM caches to address these challenges. We study several filter-based DRAM caching techniques: (a) a filter cache (CHOP-FC) that profiles pages and determines the hot subset of pages to allocate into the DRAM cache, (b) a memory-based filter cache (CHOP-MFC) that spills and fills filter state to improve accuracy and reduce the size of the filter cache, and (c) an adaptive DRAM caching technique (CHOP-AFC) that determines when the filter cache should be enabled and disabled for DRAM caching. We conduct detailed simulations with server workloads to show that our filter-based DRAM caching techniques achieve the following: (a) on average over 30% performance improvement over previous solutions, (b) several orders of magnitude lower area overhead than the tag space required for cache-line-based DRAM caches, and (c) significantly lower memory bandwidth consumption as compared to page-granular DRAM caches.
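The core idea behind CHOP-FC can be illustrated with a minimal sketch: a small structure tracks per-page access counters, and a page is promoted into the DRAM cache only once its counter crosses a hotness threshold, so cold pages never consume DRAM-cache bandwidth. The counter threshold, tracked-set capacity, and eviction policy below are hypothetical choices for illustration, not parameters from the paper.

```python
# Illustrative sketch of a CHOP-FC-style filter cache.
# Pages are profiled with saturating counters; a page whose counter
# reaches hot_threshold is treated as "hot" and would be allocated
# into the DRAM cache. Capacity/threshold values are hypothetical.

class FilterCache:
    def __init__(self, capacity=4, hot_threshold=3):
        self.capacity = capacity          # max pages profiled at once
        self.hot_threshold = hot_threshold
        self.counters = {}                # page number -> access count
        self.hot_pages = set()            # pages promoted to the DRAM cache

    def access(self, page):
        """Record an access to `page`; return True if the page is hot."""
        if page in self.hot_pages:
            return True                   # already allocated in DRAM cache
        if page not in self.counters:
            if len(self.counters) >= self.capacity:
                # Evict the coldest profiled page (simple LFU-like policy).
                victim = min(self.counters, key=self.counters.get)
                del self.counters[victim]
            self.counters[page] = 0
        self.counters[page] += 1
        if self.counters[page] >= self.hot_threshold:
            # Promote: the page would now be allocated into the DRAM cache.
            self.hot_pages.add(page)
            del self.counters[page]
            return True
        return False
```

A CHOP-MFC variant would additionally spill these counters to memory on eviction and fill them back on re-access, and CHOP-AFC would toggle the whole filter on or off based on observed bandwidth pressure.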
