Home > Foreign Journals > Concurrency, Practice and Experience > Task scheduling based on deep reinforcement learning in a cloud manufacturing environment
Task scheduling based on deep reinforcement learning in a cloud manufacturing environment



Abstract

Cloud manufacturing promotes the intelligent transformation of the traditional manufacturing mode. In a cloud manufacturing environment, task scheduling plays an important role. However, as the number of problem instances grows, solution quality and computation time are increasingly at odds. Existing task scheduling algorithms often converge to locally optimal solutions at a high computational cost, especially for large problem instances. To tackle this problem, a task scheduling algorithm based on a deep reinforcement learning architecture (RLTS) is proposed to dynamically schedule tasks with precedence relationships onto cloud servers so as to minimize the task execution time. A Deep Q-Network, a deep reinforcement learning algorithm, is employed to cope with the complexity and high dimensionality of the problem. In simulations, the performance of the proposed algorithm is compared with four other heuristic algorithms. The experimental results show that RLTS effectively solves the task scheduling problem in a cloud manufacturing environment.
