Applied Soft Computing

Using IDS fitted Q to develop a real-time adaptive controller for dynamic resource provisioning in Cloud's virtualized environment


Abstract

Reinforcement learning (RL) is a powerful approach to adaptive control when no explicit model of the controlled system exists. To handle uncertainty, along with the lack of an explicit model of the Cloud's resource management systems, this paper applies continuous RL to provide an intelligent control scheme for dynamic resource provisioning in the spot market for the Cloud's computational resources. This spot market is a real-time environment in which, from the RL point of view, the control task of dynamic resource provisioning requires continuous domains for the (state, action) pairs. Function approximation is commonly used in RL controllers to avoid storing every continuous (state, action) pair explicitly and to provide estimates for unseen states. However, due to the computational complexity of approximation techniques such as neural networks, RL is often impractical for real-time applications. Therefore, this paper uses the Ink Drop Spread (IDS) modeling method, which models systems without incurring heavy computational complexity, as the basis for an adaptive controller for dynamic resource provisioning in the Cloud's virtualized environment. The performance of the proposed control mechanism is evaluated by measuring the job rejection rate and the capacity waste. The results show that by the end of the training episodes, over 90 days, the controller learns to reduce the job rejection rate to 0% while capacity waste is optimized down to 11.9%.
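The fitted Q-iteration scheme the abstract describes can be sketched in miniature. The sketch below is illustrative only: the workload dynamics, reward shaping, and state variables are invented for the example, and a simple nearest-neighbour regressor stands in for the paper's IDS function approximator. It shows the core loop — collect transitions, then repeatedly refit a Q-function against bootstrapped targets over a continuous (state, action) space.

```python
import numpy as np

rng = np.random.default_rng(0)
GAMMA = 0.9
ACTIONS = np.array([-1.0, 0.0, 1.0])  # scale capacity down / hold / scale up

def reward(load, capacity):
    # Penalize rejected jobs (load above capacity) more than wasted capacity.
    rejected = max(load - capacity, 0.0)
    waste = max(capacity - load, 0.0)
    return -(10.0 * rejected + waste)

def step(load, capacity, action):
    # Toy dynamics: load drifts randomly, capacity follows the chosen action.
    new_load = float(np.clip(load + rng.normal(0.0, 0.5), 0.0, 10.0))
    new_cap = float(np.clip(capacity + action, 0.0, 10.0))
    return new_load, new_cap

class KNNQ:
    """k-nearest-neighbour Q approximator over continuous (state, action).

    A placeholder for the IDS model: any regressor that is cheap to fit
    and query can play this role in fitted Q-iteration.
    """
    def __init__(self, k=5):
        self.k, self.X, self.y = k, None, None
    def fit(self, X, y):
        self.X, self.y = X, y
    def predict(self, X):
        if self.X is None:               # before the first fit, Q = 0
            return np.zeros(len(X))
        d = np.linalg.norm(self.X[None, :, :] - X[:, None, :], axis=2)
        idx = np.argsort(d, axis=1)[:, :self.k]
        return self.y[idx].mean(axis=1)

# 1) Collect transitions by random exploration of the toy environment.
samples, load, cap = [], 5.0, 5.0
for _ in range(500):
    a = rng.choice(ACTIONS)
    nl, nc = step(load, cap, a)
    samples.append((load, cap, a, reward(nl, nc), nl, nc))
    load, cap = nl, nc
S = np.array(samples)
X = S[:, :3]                             # features: (load, capacity, action)

# 2) Fitted Q-iteration: refit Q against bootstrapped Bellman targets.
q = KNNQ()
for _ in range(20):
    next_qs = np.stack([
        q.predict(np.column_stack([S[:, 4], S[:, 5], np.full(len(S), a)]))
        for a in ACTIONS
    ])
    targets = S[:, 3] + GAMMA * next_qs.max(axis=0)
    q.fit(X, targets)

# 3) Greedy provisioning policy from the learned Q-function.
def act(load, capacity):
    qs = q.predict(np.column_stack(
        [np.full(len(ACTIONS), load), np.full(len(ACTIONS), capacity), ACTIONS]))
    return float(ACTIONS[int(np.argmax(qs))])
```

The paper's contribution, by this reading, is replacing the expensive approximator (e.g. a neural network) in step 2 with the lightweight IDS model so the refit remains feasible under real-time constraints.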

