
Genetic reinforcement learning approach to the heterogeneous machine scheduling problem.



Abstract

This research focuses on the development of a learning-based heuristic for scheduling heterogeneous machines. Although list scheduling methods have been widely used for a large class of scheduling problems, including the heterogeneous machine scheduling problem, they involve designing priority rules, which usually requires a fair amount of insight into the characteristics of the problem to be solved. Instead of elaborately designing priority rules in a single step, we propose an iterative list-scheduling method that refines its priority rules as it generates a series of schedules. The proposed iterative list scheduling is formulated as a reinforcement learning problem, with states and actions defined in terms of list scheduling. Because of the large number of possible states, reinforcement learning algorithms that construct an optimal policy from value functions may not be suitable for scheduling problems. Thus, to work directly with policies rather than state values, we propose genetic reinforcement learning (GRL), in which reinforcement learning policies are encoded as the chromosomes of a genetic algorithm and a near-optimal policy is sought by genetic search. A GRL-based scheduler called EVIS (EVolutionary Intracell Scheduler) has been developed and applied to a variety of scheduling problems, including heterogeneous machine scheduling, processor scheduling, job-shop scheduling, flow-shop scheduling, and open-shop scheduling. The proposed model of EVIS, whose population fitness converges in linear order, is verified by computer experiments. Even without fine-tuning, EVIS achieves solution quality comparable to that of problem-tailored heuristics on most problem instances.
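To make the GRL formulation concrete, the following is a minimal Python sketch of the idea: a list-scheduling policy is encoded as a chromosome (here, a weight vector over a few hand-picked state features), and a plain genetic algorithm searches for a policy that minimizes makespan. The feature set, GA operators, and the random independent-task instance are illustrative assumptions, not the dissertation's actual encoding.

```python
# Sketch of genetic reinforcement learning (GRL) for heterogeneous machine
# scheduling: the list-scheduling policy is a chromosome (a weight vector over
# assumed state features), and a GA searches for a low-makespan policy.
import random

random.seed(0)

N_TASKS, N_MACHINES = 20, 3
# Heterogeneous machines: each task has a different processing time per machine.
PROC = [[random.randint(1, 20) for _ in range(N_MACHINES)] for _ in range(N_TASKS)]

def schedule(weights):
    """List scheduling: repeatedly pick the (task, machine) pair with the
    highest priority under the policy encoded by `weights`; return makespan."""
    ready = [0.0] * N_MACHINES          # next free time of each machine
    unscheduled = set(range(N_TASKS))
    while unscheduled:
        best, best_prio = None, None
        for t in unscheduled:
            for m in range(N_MACHINES):
                # Assumed state features: processing time, machine load,
                # and resulting completion time.
                feats = (PROC[t][m], ready[m], ready[m] + PROC[t][m])
                prio = -sum(w * f for w, f in zip(weights, feats))
                if best_prio is None or prio > best_prio:
                    best, best_prio = (t, m), prio
        t, m = best
        ready[m] += PROC[t][m]
        unscheduled.remove(t)
    return max(ready)                   # makespan of the generated schedule

def evolve(pop_size=30, generations=40, mut_rate=0.2):
    """Plain GA over weight-vector chromosomes; lower makespan = fitter."""
    pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=schedule)                  # rank policies by makespan
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [random.choice(g) for g in zip(a, b)]   # uniform crossover
            if random.random() < mut_rate:
                child[random.randrange(3)] += random.gauss(0, 0.3)  # mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=schedule)

best = evolve()
print("best policy weights:", [round(w, 3) for w in best])
print("makespan:", schedule(best))
```

The sketch omits precedence constraints and any reinforcement-learning bookkeeping; it is only meant to show how a priority rule can be searched for genetically, as a policy over list-scheduling states and actions, rather than designed by hand.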

Bibliographic record

  • Author: Kim, Gyoung Hwan
  • Affiliation: Purdue University
  • Degree grantor: Purdue University
  • Subjects: Engineering, Electronics and Electrical; Computer Science; Operations Research
  • Degree: Ph.D.
  • Year: 1997
  • Pagination: 173 p.
  • Format: PDF
  • Language: English
