IEEE Data Driven Control and Learning Systems Conference

A Deep Reinforcement Learning Approach to the Flexible Flowshop Scheduling Problem with Makespan Minimization



Abstract

Recent work has demonstrated the efficiency of deep reinforcement learning (DRL) in making optimization decisions in complex systems. Compared with other DRL algorithms, proximal policy optimization (PPO) offers higher stability and lower complexity. The typical flexible flowshop scheduling problem (FFSP) with identical parallel machines is NP-hard. This paper is the first to apply PPO to this problem with makespan minimization. A particular state, action, and reward function are designed so that the FFSP satisfies the Markov property. The efficiency of PPO is evaluated on a wafer pickling instance and on random instances of different scales. The results show that PPO consistently provides satisfactory solutions within a reasonable computational time.
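To make the MDP formulation in the abstract concrete, the sketch below shows one plausible way to frame the FFSP with identical parallel machines as an environment with a state vector, a job-selection action, and a reward equal to the negative increment of the makespan (so cumulative reward equals minus the final makespan). This is an illustrative assumption, not the paper's actual design: the class name `FFSPEnv`, the greedy earliest-free-machine routing through stages, and the state encoding are all hypothetical.

```python
class FFSPEnv:
    """Toy FFSP environment: n_stages stages, m identical parallel
    machines per stage. Illustrative sketch, not the paper's design."""

    def __init__(self, proc_times, machines_per_stage):
        self.proc = proc_times           # proc[job][stage] processing times
        self.m = machines_per_stage
        self.n_jobs = len(proc_times)
        self.n_stages = len(proc_times[0])
        self.reset()

    def reset(self):
        # earliest free time of each machine at each stage
        self.free = [[0.0] * self.m for _ in range(self.n_stages)]
        self.remaining = set(range(self.n_jobs))
        self.makespan = 0.0
        return self._state()

    def _state(self):
        # flat vector: machine free times + binary mask of unscheduled jobs
        times = [t for stage in self.free for t in stage]
        mask = [1.0 if j in self.remaining else 0.0 for j in range(self.n_jobs)]
        return times + mask

    def step(self, job):
        """Action = index of the next job to dispatch; the job is routed
        through every stage on the earliest-free machine."""
        assert job in self.remaining
        self.remaining.remove(job)
        ready = 0.0
        for s in range(self.n_stages):
            k = min(range(self.m), key=lambda i: self.free[s][i])
            start = max(ready, self.free[s][k])
            ready = start + self.proc[job][s]
            self.free[s][k] = ready
        old = self.makespan
        self.makespan = max(self.makespan, ready)
        reward = -(self.makespan - old)  # negative makespan increment
        done = not self.remaining
        return self._state(), reward, done


# Usage: dispatching three identical jobs on 2 machines per stage.
env = FFSPEnv([[2, 3], [2, 3], [2, 3]], machines_per_stage=2)
total = 0.0
for j in (0, 1, 2):
    _, r, done = env.step(j)
    total += r
print(env.makespan, total)  # cumulative reward = -makespan
```

With this reward shaping, any policy maximizing the return minimizes the makespan, which is what lets a generic PPO agent be plugged in on top of the environment.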
