Microprocessors and Microsystems

On The Performance Benefits Of Sharing And Privatizing Second And Third-level Cache Memories In Homogeneous Multi-core Architectures



Abstract

The benefits and deficiencies of shared and private caches have been identified by researchers, but the performance impact of privatizing or sharing caches on homogeneous multi-core architectures is less well understood. This paper investigates the performance impact of cache sharing on a homogeneous, same-ISA 16-core processor with private first-level (L1) caches by considering three cache models that vary the sharing of the second-level (L2) and third-level (L3) cache banks. It is observed that across many scenarios, the average memory access time of cache privatization improves relative to sharing as the L1 cache miss rate increases and/or the cross-partition interconnect latencies increase. Under a uniform memory address distribution, and when the L3 cache miss rate is close to 0, privatizing both the L2s and L3s performs best among the three cache models. Furthermore, we mathematically demonstrate that when the interconnect bridge latency is below 264 cycles, privatizing only the L2 caches beats privatizing both the L2 and L3 caches, while the reverse holds for large bridge latencies representative of high-traffic, heavy-workload applications. For large interconnect delays, the private L2 and L3 model is best. For low to moderate interconnect latencies, and when the L3 miss rate is not close to 0, sharing both the L2 and L3 banks among all cores performs best, followed by privatizing the L2s, while privatizing both the L2s and L3s ranks last. Under worst-case address distributions, the benefits of cache privatization generally increase, and with large bridge latencies, privatizing the L2 and L3 banks outperforms the other cache models. This reveals that as application workloads grow heavier over time, producing higher cache miss rates and longer bridge and interconnect delays, privatizing the L2 and L3 caches may prove beneficial, whereas under less stressful workloads sharing both the L2 and L3 caches has the upper hand. This study confirms the value of making the cache memory's degree of sharing configurable and flexible based on the running workload.
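To make the trade-off concrete, the minimal sketch below uses a simple additive average memory access time (AMAT) model to compare the three cache organizations the abstract describes: fully private L2/L3, private L2 with a shared L3, and fully shared L2/L3. The amat helper, the model structure, and every latency and miss-rate value are illustrative assumptions rather than the paper's actual parameters; the only figure taken from the abstract is that a bridge-latency crossover exists (264 cycles under the paper's own model).

```python
# Hedged sketch: a simple additive AMAT model for the three cache organizations
# discussed in the abstract. All latencies and miss rates below are assumed for
# illustration; only the qualitative trade-off (shared banks pay an interconnect
# "bridge" penalty, private banks pay with higher miss rates) follows the text.

def amat(l1_hit, l1_miss_rate, l2_access, l2_miss_rate,
         l3_access, l3_miss_rate, mem_latency,
         l2_bridge=0, l3_bridge=0):
    """AMAT in cycles for a three-level cache hierarchy.

    l2_bridge / l3_bridge model the extra cross-partition interconnect delay
    paid when the L2 / L3 bank is shared (remote); they are 0 for private banks.
    """
    l3_penalty = l3_access + l3_bridge + l3_miss_rate * mem_latency
    l2_penalty = l2_access + l2_bridge + l2_miss_rate * l3_penalty
    return l1_hit + l1_miss_rate * l2_penalty

# Assumed baseline parameters (not taken from the paper).
base = dict(l1_hit=1, l1_miss_rate=0.10, l2_access=10, l3_access=30,
            mem_latency=200)

for bridge in (10, 300):  # low vs. very high interconnect bridge latency
    # Private L2 + private L3: no bridge penalty, but smaller per-core capacity,
    # hence higher (assumed) miss rates.
    private_all = amat(l2_miss_rate=0.40, l3_miss_rate=0.30, **base)
    # Private L2 + shared L3: bridge penalty only on L3 accesses.
    priv_l2_shared_l3 = amat(l2_miss_rate=0.40, l3_miss_rate=0.10,
                             l3_bridge=bridge, **base)
    # Shared L2 + shared L3: lowest (assumed) miss rates, bridge on both levels.
    shared_all = amat(l2_miss_rate=0.20, l3_miss_rate=0.10,
                      l2_bridge=bridge, l3_bridge=bridge, **base)
    print(f"bridge={bridge:>3} cycles  "
          f"private L2+L3: {private_all:5.1f}  "
          f"private L2/shared L3: {priv_l2_shared_l3:5.1f}  "
          f"shared L2+L3: {shared_all:5.1f}")
```

With these assumed numbers, sharing both levels yields the lowest AMAT at a low bridge latency, while the fully private configuration wins once the bridge latency becomes very large, mirroring the crossover behavior the abstract reports.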

