
Adaptive cache aware multiprocessor scheduling framework


Abstract

Computer resource allocation represents a significant challenge, particularly for multiprocessor systems, which consist of shared computing resources to be allocated among co-runner processes and threads. While efficient resource allocation results in a highly efficient and stable overall multiprocessor system and good individual thread performance, ineffective resource allocation causes significant performance bottlenecks even for systems with abundant computing resources. This thesis proposes a cache aware adaptive closed loop scheduling framework as an efficient resource allocation strategy for this highly dynamic resource management problem, which requires instant estimation of highly uncertain and unpredictable resource patterns. Many different approaches to this resource allocation problem have been developed, but neither its dynamic nature nor its time-varying and uncertain characteristics are well considered. These approaches employ either static or dynamic optimization methods, or advanced scheduling algorithms such as the Proportional Fair (PFair) scheduling algorithm. Some of these approaches, which do consider the dynamic nature of multiprocessor systems, apply only a basic closed loop system and hence fail to take the time-varying, uncertain nature of the system into account. Therefore, further research into multiprocessor resource allocation is required. Our closed loop cache aware adaptive scheduling framework takes resource availability and resource usage patterns into account by measuring time-varying factors such as cache miss counts, stalls, and instruction counts. More specifically, the cache usage pattern of a thread is identified using the QR recursive least squares (RLS) algorithm and cache miss count time-series statistics. For the identified cache resource dynamics, our closed loop cache aware adaptive scheduling framework enforces instruction fairness for the threads.
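The abstract names QR RLS as the estimator but gives no implementation detail; as orientation, the following is a minimal sketch of a conventional RLS update with a forgetting factor (the QR-factorized variant improves numerical conditioning but follows the same recursion). All variable names, the synthetic data, and the forgetting factor are illustrative assumptions, not the thesis's actual code.

```python
import numpy as np

def rls_step(theta, P, x, y, lam=0.98):
    """One recursive least squares update with forgetting factor lam.

    theta : current parameter estimate, shape (n,)
    P     : inverse-correlation matrix, shape (n, n)
    x     : regressor vector, shape (n,) -- e.g. recent cache-miss counts
    y     : new scalar observation -- e.g. the current instruction count
    """
    Px = P @ x
    k = Px / (lam + x @ Px)           # gain vector
    err = y - theta @ x               # a-priori prediction error
    theta = theta + k * err           # parameter update
    P = (P - np.outer(k, Px)) / lam   # covariance update with forgetting
    return theta, P

# Illustrative run: recover the weights of a synthetic linear pattern.
rng = np.random.default_rng(0)
true_w = np.array([0.6, -0.3, 0.1])     # hypothetical "resource pattern"
theta = np.zeros(3)
P = np.eye(3) * 1000.0                  # large initial covariance
for _ in range(500):
    x = rng.standard_normal(3)
    y = true_w @ x + 0.01 * rng.standard_normal()
    theta, P = rls_step(theta, P, x, y)
```

The forgetting factor `lam < 1` discounts old samples, which is what lets the estimator track the time-varying cache behavior the abstract emphasizes.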
Fairness, in the context of our research project, is defined as resource allocation equity, which reduces co-runner thread dependence in a shared resource environment. In this way, instruction count degradation due to shared cache resource conflicts is overcome. In this respect, our closed loop cache aware adaptive scheduling framework contributes to the research field in two major and three minor aspects. The two major contributions lead to the cache aware scheduling system. The first major contribution is the development of the execution fairness algorithm, which reduces the co-runner cache impact on thread performance. The second is the development of the relevant mathematical models, such as the thread execution pattern and cache access pattern models, which formulate the execution fairness algorithm in terms of mathematical quantities. Following the development of the cache aware scheduling system, our adaptive self-tuning control framework is constructed to add an adaptive closed loop aspect to it. This control framework consists of two main components: the parameter estimator and the controller design module. The first minor contribution is the development of the parameter estimators; the QR recursive least squares (RLS) algorithm is applied in our closed loop cache aware adaptive scheduling framework to estimate the highly uncertain and time-varying cache resource patterns of threads. The second minor contribution is the design of the controller design module; the algebraic controller design algorithm, pole placement, is utilized to design the relevant controller, which is able to provide the desired time-varying control action. The adaptive self-tuning control framework and the cache aware scheduling system together constitute our final framework, the closed loop cache aware adaptive scheduling framework.
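The abstract names pole placement as the controller design method without specifying the plant model. As a sketch only, the idea on a hypothetical first-order discrete plant y[k+1] = a·y[k] + b·u[k]: proportional feedback u = -K·y gives the closed loop pole a - b·K, so K is solved to put that pole at a desired location. The plant coefficients and target pole below are made-up illustrations, not values from the thesis.

```python
def place_pole_first_order(a, b, p_desired):
    """Gain K so the closed loop y[k+1] = (a - b*K) * y[k] has pole p_desired."""
    return (a - p_desired) / b

# Hypothetical plant: open-loop pole 0.9 (slow), input gain 0.5.
a, b = 0.9, 0.5
K = place_pole_first_order(a, b, 0.4)   # move the pole to 0.4 (faster decay)

# Simulate the closed loop from y0 = 1.0; y[k] should decay like 0.4**k.
y = 1.0
traj = []
for _ in range(10):
    u = -K * y            # time-varying control action from state feedback
    y = a * y + b * u
    traj.append(y)
```

In the self-tuning arrangement the abstract describes, the RLS estimates of a and b would be fed into this design step at each update, so the gain tracks the identified cache dynamics.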
The third minor contribution is the validation of the efficiency of this cache aware adaptive closed loop scheduling framework in overcoming co-runner cache dependency. Time-series statistical counters are developed for the M-Sim multi-core simulator, and the theoretical findings and mathematical formulations are implemented as MATLAB m-files. In this way, the overall framework is tested and the experiment outcomes are analyzed. According to our experiment outcomes, it is concluded that our closed loop cache aware adaptive scheduling framework successfully drives the co-runner cache dependent thread instruction count to the co-runner independent instruction count, within an error margin of up to 25% when the cache is highly utilized. In addition, the thread cache access pattern is estimated with 75% accuracy.

Bibliographic record

  • Author

    Arslan Huseyin Gokseli;

  • Author affiliation
  • Year 2011
  • Total pages
  • Original format PDF
  • Language English
  • CLC classification

