
Evaluation of Cache Management Algorithms for Shared Last Level Caches


Abstract

The performance gap between processors and main memory has been growing over the last decades. Fast memory structures known as caches were introduced to mitigate some of the effects of this gap. After processor manufacturers reached the limits of single-core processor performance in the early 2000s, multicore processors have become common. Multicore processors commonly share cache space between cores, and algorithms that manage access to shared cache structures have become an important research topic. Many researchers have presented algorithms that are supposed to improve the performance of multicore processors by modifying cache policies.

In this thesis, we present and evaluate several recent and important works in the cache management field. We present a simulation framework for the evaluation of various cache management algorithms, based on the Sniper simulation system. Several of the presented algorithms are implemented: Thread Aware Dynamic Insertion Policy (TADIP), Dynamic Re-Reference Interval Prediction (DRRIP), Utility Cache Partitioning (UCP), Promotion/Insertion Pseudo-Partitioning (PIPP), and Probabilistic Shared Cache Management (PriSM). The implemented algorithms are evaluated against the commonly used Least Recently Used (LRU) replacement policy and against each other. In addition, we perform five sensitivity analysis experiments, exploring algorithm sensitivity to changes in the simulated architecture. In total, data from almost 9000 simulation runs is used in our evaluation.

Our results suggest that all implemented algorithms mostly perform as well as or better than LRU in 4-core architectures. In 8- and 16-core architectures some of the algorithms, especially PIPP, perform worse than LRU. Throughout all our experiments UCP, the oldest of the evaluated alternatives to LRU, is the best performer, with an average performance increase of about 5%. We also show that the UCP performance gain increases to more than 20% when available cache and memory resources are reduced.
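The LRU replacement policy that serves as the baseline in this evaluation can be illustrated with a minimal sketch of a single set in a set-associative cache. This is our own illustrative model, not code from the thesis or the Sniper framework; the class and method names are invented for this example.

```python
from collections import OrderedDict

class LRUCacheSet:
    """Minimal model of one set in a set-associative cache with LRU replacement.

    `ways` is the set associativity. On a miss when the set is full, the
    least recently used tag is evicted. (Illustrative sketch only; names
    are assumptions, not taken from the thesis.)
    """

    def __init__(self, ways: int):
        self.ways = ways
        self.lines = OrderedDict()  # tag -> None, ordered from LRU to MRU

    def access(self, tag) -> bool:
        """Return True on a hit, False on a miss (filling the line either way)."""
        if tag in self.lines:
            self.lines.move_to_end(tag)  # promote the hit line to MRU
            return True
        if len(self.lines) >= self.ways:
            self.lines.popitem(last=False)  # evict the LRU line
        self.lines[tag] = None  # insert the new line at the MRU position
        return False
```

The alternative policies evaluated in the thesis differ mainly in where a newly inserted line is placed and how lines are promoted (TADIP, DRRIP, PIPP) or in how the ways of a shared set are partitioned between cores (UCP, PriSM), rather than in this basic bookkeeping.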

Bibliographic details

  • Author: Olsen Runar Bergheim
  • Year: 2015
  • Format: PDF
  • Language: English (eng)
