Computer Architecture Letters

B-Fetch: Branch Prediction Directed Prefetching for In-Order Processors



Abstract

Computer architecture is beset by two opposing trends. Technology scaling and deep pipelining have led to high memory access latencies; meanwhile, power and energy considerations have revived interest in traditional in-order processors. In-order processors, unlike their superscalar counterparts, do not allow execution to continue around data cache misses. In-order processors therefore suffer a greater performance penalty in light of current high memory access latencies. Memory prefetching is an established technique for reducing the incidence of cache misses and improving performance. In this paper, we introduce B-Fetch, a new technique for data prefetching that combines branch-prediction-based lookahead deep-path speculation with effective address speculation to efficiently improve performance in in-order processors. Our results show that B-Fetch improves performance by 38.8% on the SPEC CPU2006 benchmarks, beating a current state-of-the-art prefetcher design at roughly one third the hardware overhead.
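
The abstract describes the mechanism only at a high level. The sketch below is a minimal toy model of the general idea it names, walking the predicted branch path a few basic blocks ahead of execution and speculating the effective address of each load along that path so prefetches can be issued early. It is an illustrative assumption, not the actual B-Fetch microarchitecture; all names and parameters (BasicBlock, ToyBFetch, depth, the 2-bit counters, the register-file snapshot) are hypothetical.

```python
# Toy sketch of branch-prediction-directed prefetching.
# All structures and names here are illustrative assumptions,
# not the B-Fetch design evaluated in the paper.

from dataclasses import dataclass

@dataclass
class BasicBlock:
    loads: list            # (base_reg, offset) pairs for loads in this block
    branch_pc: int         # PC of the terminating branch
    taken_target: int      # next block id if the branch is taken
    fallthrough: int       # next block id if the branch is not taken

class ToyBFetch:
    def __init__(self, blocks, depth=3):
        self.blocks = blocks      # block_id -> BasicBlock
        self.depth = depth        # how many blocks ahead to speculate
        self.counters = {}        # branch_pc -> 2-bit saturating counter

    def predict_taken(self, branch_pc):
        # Weakly-taken (2) by default; >= 2 predicts taken.
        return self.counters.get(branch_pc, 2) >= 2

    def train(self, branch_pc, taken):
        # Update the 2-bit counter with the resolved branch outcome.
        c = self.counters.get(branch_pc, 2)
        self.counters[branch_pc] = min(3, c + 1) if taken else max(0, c - 1)

    def prefetch_addresses(self, block_id, reg_file):
        """Walk the predicted path `depth` blocks ahead and speculate the
        effective address (base register value + offset) of each load."""
        addrs = []
        for _ in range(self.depth):
            blk = self.blocks[block_id]
            addrs += [reg_file.get(base, 0) + off for base, off in blk.loads]
            block_id = (blk.taken_target if self.predict_taken(blk.branch_pc)
                        else blk.fallthrough)
        return addrs

# Example: a two-block loop; speculate load addresses three blocks down the path.
blocks = {
    0: BasicBlock(loads=[("r1", 0), ("r1", 64)], branch_pc=0x40,
                  taken_target=1, fallthrough=0),
    1: BasicBlock(loads=[("r2", 8)], branch_pc=0x80,
                  taken_target=0, fallthrough=1),
}
bf = ToyBFetch(blocks)
print(bf.prefetch_addresses(0, {"r1": 0x1000, "r2": 0x2000}))
```

The addresses returned by prefetch_addresses would feed a prefetch queue ahead of the demand stream; the real design additionally has to handle register values that change along the speculated path, which this sketch ignores.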
