Knowledge-Based Systems

Rule-based reinforcement learning methodology to inform evolutionary algorithms for constrained optimization of engineering applications


Abstract

For practical engineering optimization problems, the design space is typically narrow given all the real-world constraints. Reinforcement learning (RL) has commonly been guided by stochastic algorithms to tune hyperparameters and leverage exploration. Conversely, in this work we propose a rule-based RL methodology to guide evolutionary algorithms (EA) in constrained optimization. First, RL proximal policy optimization (PPO) agents are trained to master some of the problem rules/constraints; RL is then used to inject experiences that guide various evolutionary/stochastic algorithms such as genetic algorithms, simulated annealing, particle swarm optimization, differential evolution, and natural evolution strategies. Accordingly, we develop RL-guided EAs, which are benchmarked against their standalone counterparts. In continuous optimization, RL-guided EA demonstrates significant improvement over standalone EA on two engineering benchmarks. The main problem analyzed is nuclear fuel assembly combinatorial optimization, which is high-dimensional and involves computationally expensive physics. The results demonstrate the ability of RL to efficiently learn the rules that nuclear fuel engineers follow to realize candidate solutions. Without these rules, the design space is too large for RL/EA to find many candidate solutions. By imposing the rule-based RL methodology, we found that RL-guided EA outperforms the standalone algorithms by a wide margin, with a 10-fold improvement in exploration capability and computational efficiency. These insights imply that, when facing a constrained problem with numerous local optima, RL can be useful for focusing the search on areas where expert knowledge has demonstrated merit, while evolutionary/stochastic algorithms use their exploratory features to increase the number of feasible solutions. Published by Elsevier B.V.
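The core idea of the methodology lends itself to a compact illustration. Below is a minimal Python sketch of how a rule-aware sampler can seed a genetic algorithm's population so the search starts inside the feasible region. Everything here is a hypothetical stand-in: the toy adjacency constraint, the `rule_guided_sample` function standing in for a trained PPO policy, and all names are invented for illustration; the paper's actual experience-injection scheme and nuclear fuel assembly physics are far more involved.

```python
import random

# Toy constrained combinatorial problem: assign one of N_TYPES "fuel" types
# to each of N_SLOTS slots such that no two adjacent slots share a type
# (a crude stand-in for the placement rules fuel engineers follow).
N_SLOTS, N_TYPES = 12, 4

def fitness(candidate):
    """Higher is better: reward type diversity, penalize adjacency violations."""
    violations = sum(a == b for a, b in zip(candidate, candidate[1:]))
    return len(set(candidate)) - 10 * violations

def rule_guided_sample():
    """Stand-in for a trained RL (PPO) policy that has learned the
    adjacency rule: each slot is sampled only from rule-compliant types."""
    candidate = [random.randrange(N_TYPES)]
    for _ in range(N_SLOTS - 1):
        allowed = [t for t in range(N_TYPES) if t != candidate[-1]]
        candidate.append(random.choice(allowed))
    return candidate

def random_sample():
    """Unguided sampler used by the standalone GA baseline."""
    return [random.randrange(N_TYPES) for _ in range(N_SLOTS)]

def genetic_algorithm(seed_fn, pop_size=40, generations=60, mut_rate=0.1):
    """Plain GA; the only difference between the 'standalone' and
    'RL-guided' runs is the function used to seed the initial population."""
    pop = [seed_fn() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_SLOTS)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut_rate:      # point mutation
                child[random.randrange(N_SLOTS)] = random.randrange(N_TYPES)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    random.seed(0)
    best_std = genetic_algorithm(random_sample)        # standalone GA
    best_rl = genetic_algorithm(rule_guided_sample)    # "RL-guided" GA
    print("standalone:", fitness(best_std), "rule-guided:", fitness(best_rl))
```

The design choice mirrors the abstract's claim: the evolutionary algorithm itself is unchanged, and only the source of candidate experiences differs between the standalone and guided runs, so the EA keeps its exploratory role while the rule-aware sampler confines the search to regions expert knowledge deems feasible.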
