Annual IEEE/ACM International Symposium on Microarchitecture

The application slowdown model: Quantifying and controlling the impact of inter-application interference at shared caches and main memory



Abstract

In a multi-core system, interference at shared resources (such as caches and main memory) slows down applications running on different cores. Accurately estimating the slowdown of each application has several benefits: e.g., it can enable shared resource allocation in a manner that avoids unfair application slowdowns or provides slowdown guarantees. Unfortunately, prior works on estimating slowdowns either lead to inaccurate estimates, do not take into account shared caches, or rely on a priori application knowledge. This severely limits their applicability. In this work, we propose the Application Slowdown Model (ASM), a new technique that accurately estimates application slowdowns due to interference at both the shared cache and main memory, in the absence of a priori application knowledge. ASM is based on the observation that the performance of each application is strongly correlated with the rate at which the application accesses the shared cache. Thus, ASM reduces the problem of estimating slowdown to that of estimating the shared cache access rate the application would have had if it had been run alone on the system. To estimate this for each application, ASM periodically 1) minimizes interference for the application at the main memory and 2) quantifies the interference the application receives at the shared cache, in an aggregate manner over a large set of requests. Our evaluations across 100 workloads show that ASM has an average slowdown estimation error of only 9.9%, a 2.97× improvement over the best previous mechanism. We present several use cases of ASM that leverage its slowdown estimates to improve fairness and performance, and to provide slowdown guarantees. We provide detailed evaluations of three such use cases: slowdown-aware cache partitioning, slowdown-aware memory bandwidth partitioning, and an example scheme to provide soft slowdown guarantees. Our evaluations show that these new schemes perform significantly better than state-of-the-art cache partitioning and memory scheduling schemes.
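The core reduction described in the abstract can be illustrated with a minimal sketch. It assumes the slowdown estimate takes the ratio form implied by the text (alone-run shared cache access rate over shared-run shared cache access rate); the function names, the way interference cycles are discounted, and all numbers below are illustrative assumptions, not the paper's actual implementation.

```python
def estimate_slowdown(car_alone: float, car_shared: float) -> float:
    """Slowdown estimate: (shared cache access rate when run alone) /
    (shared cache access rate when run with other applications)."""
    return car_alone / car_shared


def estimate_car_alone(accesses: int,
                       cycles_observed: int,
                       interference_cycles: int) -> float:
    """Approximate the alone-run cache access rate by discounting cycles
    attributed to inter-application interference (memory interference plus
    shared-cache contention misses) from the observed execution window."""
    effective_cycles = max(cycles_observed - interference_cycles, 1)
    return accesses / effective_cycles


# Illustrative numbers: 50,000 shared-cache accesses observed over a
# 1,000,000-cycle window, of which 200,000 cycles are attributed to
# interference at main memory and the shared cache.
car_shared = 50_000 / 1_000_000                              # 0.05 accesses/cycle
car_alone = estimate_car_alone(50_000, 1_000_000, 200_000)   # 0.0625 accesses/cycle
print(f"estimated slowdown: {estimate_slowdown(car_alone, car_shared):.2f}x")
# -> estimated slowdown: 1.25x
```

In this sketch, the interference-cycle count stands in for what ASM obtains by periodically prioritizing the application at main memory and by aggregating shared-cache contention over many requests, as described above.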
