Computers & Chemical Engineering

A deep reinforcement learning approach for chemical production scheduling



Abstract

This work examines applying deep reinforcement learning to a chemical production scheduling process to account for uncertainty and achieve online, dynamic scheduling, and benchmarks the results against a mixed-integer linear programming (MILP) model that schedules each time interval on a receding horizon basis. An industrial example is used as a case study for comparing the approaches. Results show that the reinforcement learning method outperforms the naive MILP approaches and is competitive with a shrinking horizon MILP approach in terms of profitability, inventory levels, and customer service. The speed and flexibility of the reinforcement learning system are promising for achieving real-time optimization of a scheduling system, but there is reason to pursue integration of data-driven deep reinforcement learning methods and model-based mathematical optimization approaches.
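The abstract gives no implementation details, so the sketch below is only a rough illustration of the online, dynamic scheduling idea: an agent trained by reinforcement learning picks which product to run each period under stochastic demand. It uses tabular Q-learning as a simplified stand-in for the paper's deep RL agent, and every quantity (N_PRODUCTS, MAX_INV, HOLD_COST, PRICE, the demand model) is invented for illustration, not taken from the paper.

```python
# Toy RL production scheduler: a hypothetical two-product plant where,
# each period, the agent chooses one product to produce, then random
# demand is served. Reward = sales revenue minus inventory holding cost.
# Tabular Q-learning stands in for the deep RL agent in the paper.
import random

N_PRODUCTS = 2   # hypothetical products (assumed)
MAX_INV = 5      # inventory cap per product (assumed)
HOLD_COST = 0.1  # holding cost per unit per period (assumed)
PRICE = 1.0      # revenue per unit sold (assumed)
EPISODES = 2000
HORIZON = 50

def step(inv, action):
    """Produce one unit of product `action`, then serve random demand."""
    inv = list(inv)
    inv[action] = min(inv[action] + 1, MAX_INV)
    reward = 0.0
    for p in range(N_PRODUCTS):
        demand = random.randint(0, 1)   # uncertain demand per period
        sold = min(demand, inv[p])
        inv[p] -= sold
        reward += PRICE * sold - HOLD_COST * inv[p]
    return tuple(inv), reward

Q = {}  # maps inventory state (tuple) -> list of per-action values

def qvals(s):
    return Q.setdefault(s, [0.0] * N_PRODUCTS)

ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
for _ in range(EPISODES):
    s = (0,) * N_PRODUCTS
    for _ in range(HORIZON):
        # Epsilon-greedy choice of which product to run this period.
        if random.random() < EPS:
            a = random.randrange(N_PRODUCTS)
        else:
            a = max(range(N_PRODUCTS), key=lambda i: qvals(s)[i])
        s2, r = step(s, a)
        # One-step Q-learning update toward reward + discounted best next value.
        qvals(s)[a] += ALPHA * (r + GAMMA * max(qvals(s2)) - qvals(s)[a])
        s = s2

start = (0,) * N_PRODUCTS
print("Learned greedy action at empty inventory:",
      max(range(N_PRODUCTS), key=lambda i: qvals(start)[i]))
```

The MILP benchmark described in the abstract would instead re-solve an optimization over a receding horizon at each interval; the speed advantage the abstract attributes to the trained RL policy is that each online decision is a cheap lookup (or, in the deep case, a forward pass) rather than a fresh optimization.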
