Path-based next trace prediction

Abstract

The trace cache has been proposed as a mechanism for providing increased fetch bandwidth by allowing the processor to fetch across multiple branches in a single cycle. But to date, predicting multiple branches per cycle has meant paying a penalty in prediction accuracy. We propose a next trace predictor that treats the traces as basic units and explicitly predicts sequences of traces. The predictor collects histories of trace sequences (paths) and makes predictions based on these histories. The basic predictor is enhanced to a hybrid configuration that reduces performance losses due to cold starts and aliasing in the prediction table. The Return History Stack is introduced to increase predictor performance by saving path history information across procedure calls and returns. Overall, the predictor yields about a 26% reduction in misprediction rates when compared with the most aggressive previously proposed multiple-branch-prediction methods.
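
To make the mechanism concrete, below is a minimal C sketch of the idea the abstract describes: a table indexed by a hash of the recent path of trace IDs, a small hysteresis counter per entry, and a return history stack that preserves path context across calls. The history depth, table size, hash function, and all identifiers (predictor_t, hash_path, on_call, on_return) are illustrative assumptions, not the paper's actual configuration or indexing scheme.

```c
/* Minimal sketch of a path-based next trace predictor in the spirit of the
 * abstract above. History depth, table size, the hash, and the return
 * history stack handling are illustrative assumptions. */
#include <stdint.h>
#include <stdio.h>

#define HIST_DEPTH 4                    /* past trace IDs kept as path history */
#define TABLE_BITS 14                   /* 2^14-entry prediction table (assumed) */
#define TABLE_SIZE (1u << TABLE_BITS)
#define RHS_DEPTH  16                   /* return history stack depth (assumed) */

typedef struct {
    uint32_t next_trace;                /* predicted ID of the next trace */
    uint8_t  confidence;                /* 2-bit hysteresis counter */
} entry_t;

typedef struct {
    uint32_t hist[HIST_DEPTH];          /* hist[0] is the most recent trace ID */
    entry_t  table[TABLE_SIZE];
    uint32_t rhs[RHS_DEPTH];            /* saved path hashes across calls */
    int      rhs_top;
} predictor_t;

/* Fold the path history into a table index; a simple shift-and-XOR stand-in
 * for the paper's hashing of trace identifiers. */
static uint32_t hash_path(const predictor_t *p)
{
    uint32_t h = 0;
    for (int i = 0; i < HIST_DEPTH; i++)
        h ^= p->hist[i] << (3 * i);     /* older traces contribute fewer bits */
    return h & (TABLE_SIZE - 1);
}

/* Predict the ID of the next trace from the current path. */
uint32_t predict(const predictor_t *p)
{
    return p->table[hash_path(p)].next_trace;
}

/* Once the actual next trace is known, train the table entry and shift the
 * trace ID into the path history. */
void update(predictor_t *p, uint32_t actual)
{
    entry_t *e = &p->table[hash_path(p)];
    if (e->next_trace == actual) {
        if (e->confidence < 3) e->confidence++;
    } else if (e->confidence > 0) {
        e->confidence--;                /* hysteresis before replacement */
    } else {
        e->next_trace = actual;
    }
    for (int i = HIST_DEPTH - 1; i > 0; i--)
        p->hist[i] = p->hist[i - 1];
    p->hist[0] = actual;
}

/* Return history stack: remember the pre-call path hash so predictions made
 * after the return can still correlate with the pre-call path. */
void on_call(predictor_t *p)
{
    if (p->rhs_top < RHS_DEPTH)
        p->rhs[p->rhs_top++] = hash_path(p);
}

void on_return(predictor_t *p)
{
    if (p->rhs_top > 0)
        p->hist[HIST_DEPTH - 1] = p->rhs[--p->rhs_top];  /* fold saved context back in */
}

int main(void)
{
    static predictor_t p;               /* zero-initialized table and history */
    const uint32_t path[] = { 10, 20, 30, 40 };
    int correct = 0, total = 0;

    /* Feed a repeating sequence of trace IDs; after warm-up the table learns
     * which trace follows each path. */
    for (int rep = 0; rep < 100; rep++) {
        for (int i = 0; i < 4; i++) {
            if (predict(&p) == path[i]) correct++;
            total++;
            update(&p, path[i]);
        }
    }
    printf("predicted %d of %d traces correctly\n", correct, total);
    return 0;
}
```

The hysteresis counter stands in for tolerating occasional aliasing in the prediction table; a full implementation would follow the hybrid configuration and return history stack behavior that the paper evaluates rather than this simplified single-table version.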