IEEE International Parallel and Distributed Processing Symposium

Discrete Cache Insertion Policies for Shared Last Level Cache Management on Large Multicores

Abstract

Multi-core processors employ shared Last Level Caches (LLCs). This trend will continue with large multi-core processors (16 cores and beyond). At the same time, the associativity of the LLC tends to remain on the order of sixteen. Consequently, in large multi-core processors, the number of cores that share the LLC becomes larger than the associativity of the cache itself. LLC management policies have been extensively studied for small-scale multi-cores (4 to 8 cores) with associativity degrees around 16. However, the impact of LLC management on large multi-cores is essentially unknown, in particular when the associativity degree is smaller than the number of cores. In this study, we introduce Adaptive Discrete and deprioritized Application PrioriTization (ADAPT), an LLC management policy that addresses large multi-cores where the LLC associativity degree is smaller than the number of cores. ADAPT builds on the Footprint-number metric. We propose a monitoring mechanism that dynamically samples cache sets to estimate the Footprint-number of applications and classifies them into discrete (distinct, and more than two) priority buckets. The cache replacement policy leverages this classification and assigns priorities to the cache lines of applications during cache replacement operations. We further find that de-prioritizing certain applications during cache replacement is beneficial to overall performance. We evaluate our proposal on 16-, 20-, and 24-core multi-programmed workloads and discuss other aspects in detail.
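
As an illustration of the mechanism described in the abstract, the sketch below shows how an estimated Footprint-number could be mapped to discrete priority buckets and then to insertion ages for an RRIP-style replacement policy. This is a minimal C++ sketch under assumed thresholds, bucket names, and age values; none of these constants come from the paper itself.

```cpp
// Minimal sketch (not the paper's implementation): classify an application
// by its estimated Footprint-number and derive an insertion age.
// All thresholds, bucket names, and age values are illustrative assumptions.
#include <cstdint>

enum class PriorityBucket { High, Medium, Low, Deprioritized };

// Map an application's Footprint-number (estimated number of distinct cache
// blocks it brings into a sampled set over a monitoring interval) to a
// discrete bucket. Applications whose footprint far exceeds the LLC
// associativity are de-prioritized, reflecting the abstract's observation
// that de-prioritizing some applications helps overall performance.
PriorityBucket classify(double footprintNumber, unsigned associativity) {
    if (footprintNumber <= 0.5 * associativity) return PriorityBucket::High;
    if (footprintNumber <= 1.0 * associativity) return PriorityBucket::Medium;
    if (footprintNumber <= 2.0 * associativity) return PriorityBucket::Low;
    return PriorityBucket::Deprioritized;
}

// Translate a bucket into an insertion age for an RRIP-style policy
// (0 = retained longest, maxAge = evicted first).
std::uint8_t insertionAge(PriorityBucket b, std::uint8_t maxAge) {
    switch (b) {
        case PriorityBucket::High:          return 0;
        case PriorityBucket::Medium:        return static_cast<std::uint8_t>(maxAge / 2);
        case PriorityBucket::Low:           return static_cast<std::uint8_t>(maxAge - 1);
        case PriorityBucket::Deprioritized: return maxAge;  // insert as next victim
    }
    return maxAge;
}
```

For example, with a 16-way LLC and a 2-bit age counter (maxAge = 3), an application with an estimated Footprint-number of 40 would fall in the Deprioritized bucket in this sketch, so its lines would be inserted at age 3 and become the preferred eviction candidates.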
