Journal of Circuits, Systems, and Computers

REDUCING CACHE HIERARCHY ENERGY CONSUMPTION BY PREDICTING FORWARDING AND DISABLING ASSOCIATIVE SETS

Abstract

The first-level data cache in modern processors has become a major consumer of energy due to its increasing size and high access frequency. To reduce this high energy consumption, we propose in this paper a straightforward filtering technique based on a highly accurate forwarding predictor. Specifically, a simple structure predicts whether a load instruction will obtain its data via forwarding from the load-store structure (thus avoiding the data cache access) or whether it will be provided by the data cache. This mechanism reduces data cache energy consumption by an average of 21.5% with a negligible performance penalty of less than 0.1%. Furthermore, we also address static cache energy consumption by disabling a portion of the sets of the L2 associative cache. Overall, when both proposals are combined, total L1 and L2 energy consumption is reduced by an average of 29.2% with a performance penalty of just 0.25%.
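To make the filtering idea concrete, the following is a minimal sketch of a PC-indexed forwarding predictor built from saturating counters, which gates the L1 data cache probe on the prediction. The table size, hash, counter width, and thresholds are illustrative assumptions, not the paper's exact configuration.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Hypothetical forwarding predictor: a small PC-indexed table of 2-bit
// saturating counters. It predicts whether a load will receive its value
// via store-to-load forwarding, in which case the L1 data cache probe
// can be skipped to save dynamic energy.
class ForwardingPredictor {
public:
    // Predict at issue time: true = expect forwarding from the
    // load-store structure, false = expect the data cache to supply it.
    bool predict_forwarded(uint64_t load_pc) const {
        return table_[index(load_pc)] >= kTakenThreshold;
    }

    // Train once the load resolves with the actual outcome.
    void update(uint64_t load_pc, bool forwarded) {
        uint8_t &ctr = table_[index(load_pc)];
        if (forwarded) {
            if (ctr < kMaxCounter) ++ctr;   // strengthen "forwarded"
        } else {
            if (ctr > 0) --ctr;             // strengthen "cache access"
        }
    }

private:
    static constexpr size_t  kEntries        = 1024;  // assumed table size
    static constexpr uint8_t kMaxCounter     = 3;     // 2-bit counter
    static constexpr uint8_t kTakenThreshold = 2;     // predict forwarding

    static size_t index(uint64_t pc) { return (pc >> 2) % kEntries; }

    std::array<uint8_t, kEntries> table_{};           // zero-initialized
};
```

In a pipeline, a load that is predicted to be forwarded would search only the load-store structure; a misprediction would require a replayed data cache access, which is why the predictor's accuracy is what keeps the reported performance penalty below 0.1%.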
