High Performance Computing - HiPC 2006, Lecture Notes in Computer Science, vol. 4297
Segmented Bitline Cache: Exploiting Non-uniform Memory Access Patterns


Abstract

On-chip caches in modern processors account for a sizable fraction of dynamic and leakage power. Much of this power is wasted: it is required only because the memory cells farthest from the sense amplifiers must discharge a large capacitance on the bitlines. We reduce this capacitance by segmenting the memory cells along the bitlines and turning off the segmenters, which lowers the overall bitline capacitance seen by an access. The success of this cache relies on accessing segments near the sense amplifiers much more often than remote segments. We show that the access pattern to the first-level data and instruction caches is extremely skewed: only a small set of cache lines is accessed frequently. We exploit this non-uniform cache access pattern by mapping the frequently accessed cache lines closer to the sense amplifiers. These lines are isolated by segmenting circuits on the bitlines and hence dissipate less power when accessed. Modifications to the address decoder enable a dynamic re-mapping of cache lines to segments. In this paper, we explore the design space of segmenting the level-one data and instruction caches. On the subset of benchmarks simulated, the instruction and data caches show potential power savings of 10% and 6%, respectively.
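Below is a minimal illustrative sketch (in Python) of the remapping idea described above. It is not the paper's implementation: the linear energy model, segment count, and Zipf-skewed trace are hypothetical assumptions, and the sketch places lines using post-hoc access counts, whereas the paper performs the remapping dynamically through a modified address decoder.

```python
# Illustrative model of a segmented-bitline cache (not the paper's design).
# Assumptions (hypothetical): 4 bitline segments, access energy grows
# linearly with distance from the sense amplifiers, and line popularity
# follows a Zipf-like skew similar to the non-uniform L1 access pattern
# the paper reports.

import random
from collections import Counter

NUM_LINES = 256          # cache lines modeled
NUM_SEGMENTS = 4         # bitline segments between sense amps and far end
LINES_PER_SEGMENT = NUM_LINES // NUM_SEGMENTS
BASE_ENERGY = 1.0        # relative energy to access the nearest segment
ENERGY_STEP = 0.5        # extra relative energy per additional segment crossed

def access_energy(segment_index):
    """Relative dynamic energy for an access to a line in `segment_index`
    (0 = segment adjacent to the sense amplifiers)."""
    return BASE_ENERGY + ENERGY_STEP * segment_index

def zipf_trace(num_accesses, num_lines, skew=1.2, seed=0):
    """Generate a skewed access trace: a few lines receive most accesses."""
    rng = random.Random(seed)
    weights = [1.0 / (i + 1) ** skew for i in range(num_lines)]
    lines = list(range(num_lines))
    rng.shuffle(lines)  # hot lines land at arbitrary physical rows
    return rng.choices(lines, weights=weights, k=num_accesses)

def total_energy(trace, line_to_segment):
    return sum(access_energy(line_to_segment[line]) for line in trace)

trace = zipf_trace(num_accesses=100_000, num_lines=NUM_LINES)

# Baseline: physical row order decides the segment (no remapping).
baseline_map = {line: line // LINES_PER_SEGMENT for line in range(NUM_LINES)}

# Frequency-aware remapping: put the most frequently accessed lines in the
# segment closest to the sense amplifiers. Here this uses whole-trace counts
# for brevity; the paper does this dynamically via the address decoder.
counts = Counter(trace)
ranked = sorted(range(NUM_LINES), key=lambda line: -counts[line])
remapped = {line: rank // LINES_PER_SEGMENT for rank, line in enumerate(ranked)}

e_base = total_energy(trace, baseline_map)
e_seg = total_energy(trace, remapped)
print(f"baseline energy: {e_base:.0f}")
print(f"remapped energy: {e_seg:.0f}  ({100 * (1 - e_seg / e_base):.1f}% saved)")
```

Under these assumptions, most accesses fall in the segment nearest the sense amplifiers after remapping, which is where the modeled savings come from; the paper's reported 10% and 6% savings are from circuit-level simulation, not a model of this kind.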
