Grid Differentiated Services: a Reinforcement Learning Approach

International Symposium on Cluster Computing and the Grid

Abstract

Large-scale production grids are a major use case for autonomic computing. Following Kephart's classical definition, an autonomic computing system should optimize its own behavior in accordance with high-level guidance from humans. The central tenet of this paper is that the combination of utility functions and reinforcement learning (RL) can provide a general and efficient method for dynamically allocating grid resources so as to optimize the satisfaction of both end users and participating institutions. The flexibility of an RL-based system makes it possible to model the state of the grid, the jobs to be scheduled, and the high-level objectives of the various actors on the grid. RL-based scheduling can seamlessly adapt its decisions to changes in the distributions of inter-arrival time, QoS requirements, and resource availability. Moreover, it requires minimal prior knowledge about the target environment, including user requests and infrastructure. Our experimental results, on both a synthetic workload and a real trace, show that RL is not only a realistic alternative to empirical scheduler design but can also outperform empirically designed schedulers.
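The abstract couples utility functions (as the optimization target) with reinforcement learning to drive scheduling decisions. Below is a minimal, hypothetical sketch of that general idea using tabular Q-learning; the state encoding, action set, parameter values, and utility-based reward are illustrative assumptions, not the scheduler described in the paper.

```python
# Minimal sketch of utility-driven RL scheduling (hypothetical; the state
# encoding, action set, and parameters are assumptions, not the paper's design).
import random
from collections import defaultdict


class RLScheduler:
    """Tabular Q-learning over coarse (queue load, resource availability) states."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # Q[(state, action)] -> estimated long-run utility
        self.actions = actions        # e.g. candidate resource classes or priority levels
        self.alpha = alpha            # learning rate
        self.gamma = gamma            # discount factor
        self.epsilon = epsilon        # exploration rate

    def choose(self, state):
        # Epsilon-greedy: mostly exploit current utility estimates, sometimes explore.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # 'reward' is the realized utility of the decision, e.g. end-user
        # satisfaction (deadline met, QoS class honored) minus institutional cost.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```

In use, each arriving job would be mapped to a discretized state, choose() would select where or at what priority to run it, and update() would be called once the realized utility is observed. No prior model of arrival-time or QoS distributions is needed, which is consistent with the abstract's claim of minimal prior knowledge about the target environment.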
