CMG Conference

Fair Share Modeling for Large Systems: Aggregation, Hierarchical Decomposition and Randomization



Abstract

HP, IBM and Sun each offer fair share scheduling packages on their UNIX platforms, so that customers can manage the performance of multiple workloads by allocating shares of system resources among the workloads. A good model can help you use such a package effectively. Since the target systems are potentially large, a model is useful only if we have a scalable algorithm to analyze it. In this paper we discuss three approaches to solving the scalability problem, based on aggregation, hierarchical decomposition and randomization. We then compare our scalable algorithms to each other and to the existing expensive exact solution, which runs in time proportional to n! for an n-workload model.
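The scalability contrast the abstract draws, an exact solution whose cost grows as n! versus cheaper approximations such as randomization, can be illustrated with a small sketch. The Python below is purely hypothetical and does not reproduce the paper's model: the `toy_delay` metric, the share values and the sample count are all invented for illustration. It averages a per-ordering performance metric exhaustively over all n! workload orderings, then approximates the same average by Monte Carlo sampling over random orderings.

```python
import itertools
import random

def toy_delay(order, shares):
    """Hypothetical per-ordering metric: workloads later in the
    ordering incur proportionally more delay. Not the paper's model."""
    return sum(shares[w] * (pos + 1) for pos, w in enumerate(order))

def exact_over_orderings(shares, metric):
    """Exact answer: average the metric over all n! workload orderings.
    This mirrors the n!-time cost of the exact solution the abstract
    mentions, so it is only feasible for small n."""
    orders = list(itertools.permutations(range(len(shares))))
    return sum(metric(o, shares) for o in orders) / len(orders)

def randomized_estimate(shares, metric, samples=2000, seed=0):
    """Scalable alternative: Monte Carlo average over sampled orderings,
    costing O(samples * n) shuffles instead of O(n!) enumerations."""
    rng = random.Random(seed)
    order = list(range(len(shares)))
    total = 0.0
    for _ in range(samples):
        rng.shuffle(order)
        total += metric(tuple(order), shares)
    return total / samples

if __name__ == "__main__":
    shares = [0.50, 0.25, 0.15, 0.10]  # hypothetical workload shares
    print("exact     :", exact_over_orderings(shares, toy_delay))
    print("randomized:", randomized_estimate(shares, toy_delay))
```

By symmetry, the exact average here is 2.5 times the total share, and the sampled estimate converges to the same value at a cost linear in n per sample. This is the general kind of trade-off that randomization-based approaches exploit; the paper's actual aggregation, hierarchical decomposition and randomization algorithms are more involved than this sketch.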


