International Conference on Computer Design

DRAM-page based prediction and prefetching

Abstract

This paper describes and evaluates a DRAM-page-based cache-line prediction and prefetching architecture. The scheme takes DRAM access timing into consideration to reduce prefetching overhead, amortizing the high cost of a DRAM access by fetching two cache lines that reside on the same DRAM page in a single access. On each DRAM access, one or two cache blocks may be prefetched. We combine three prediction mechanisms (history, stride, and one-block lookahead), make them DRAM-page sensitive, and deploy them in an effective adaptive prefetching strategy. Our simulations show that the prefetch mechanism can greatly improve system performance. Using a 32-KB prediction-table cache, the prefetching scheme improves performance by 26%-55% on average over a baseline configuration, depending on the memory model. Moreover, the simulations show that prefetching is more cost-effective than simply increasing the L2-cache size or using a one-block-lookahead prefetching scheme. Simulation results also show that DRAM-page-based prefetching yields higher relative performance as processors get faster, making the scheme more attractive for next-generation processors.
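The idea of combining history, stride, and one-block-lookahead predictors while prefetching only lines that share the miss's DRAM page can be sketched roughly as follows. This is a hypothetical illustration under assumed parameters (64-byte lines, 4-KB DRAM pages); all names are my own and the paper's actual table formats and adaptive policy differ:

```python
# Hypothetical sketch of DRAM-page-sensitive prefetch candidate selection.
# Parameters are assumptions, not the paper's configuration.
LINE_SIZE = 64          # bytes per cache line
DRAM_PAGE_SIZE = 4096   # bytes per DRAM page

def same_dram_page(addr_a, addr_b):
    """Two addresses share a DRAM page iff their page numbers match."""
    return addr_a // DRAM_PAGE_SIZE == addr_b // DRAM_PAGE_SIZE

def predict_prefetches(miss_addr, last_miss_addr, history_table):
    """Return up to two prefetch candidates on the same DRAM page as the miss.

    Combines three predictors, in priority order:
      1. history: the successor line previously recorded for this line
      2. stride:  repeat the delta between the last two misses
      3. one-block lookahead (OBL): the next sequential line
    """
    candidates = []
    # 1. History: previously observed successor of this line, if any.
    successor_line = history_table.get(miss_addr // LINE_SIZE)
    if successor_line is not None:
        candidates.append(successor_line * LINE_SIZE)
    # 2. Stride: extrapolate the last observed miss delta.
    stride = miss_addr - last_miss_addr
    if stride != 0:
        candidates.append(miss_addr + stride)
    # 3. OBL: the next sequential cache line.
    candidates.append(miss_addr + LINE_SIZE)

    # Keep at most two distinct candidates that stay on the same DRAM page,
    # so both prefetches piggyback on the already-open page.
    picked = []
    for cand in candidates:
        if cand != miss_addr and same_dram_page(cand, miss_addr) and cand not in picked:
            picked.append(cand)
        if len(picked) == 2:
            break
    return picked
```

For example, on a stride-heavy stream with no recorded history, the stride and OBL candidates are returned; near a DRAM-page boundary every candidate falls on the next page and nothing is prefetched, which is the page-sensitivity the abstract describes.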

