ACS Central Science

Optimizing Chemical Reactions with Deep Reinforcement Learning



Abstract

Deep reinforcement learning was employed to optimize chemical reactions. Our model iteratively records the results of a chemical reaction and chooses new experimental conditions to improve the reaction outcome. This model outperformed a state-of-the-art black-box optimization algorithm, requiring 71% fewer steps on both simulated and real reactions. Furthermore, we introduced an efficient exploration strategy by drawing the reaction conditions from certain probability distributions, which improved the regret from 0.062 to 0.039 compared with a deterministic policy. Combining the efficient exploration policy with accelerated microdroplet reactions, optimal reaction conditions were determined within 30 min for the four reactions considered, and a better understanding of the factors that control microdroplet reactions was reached. Moreover, our model showed better performance after training on reactions with similar or even dissimilar underlying mechanisms, which demonstrates its learning ability.
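The core loop described above (record an outcome, then sample new conditions from a probability distribution rather than a fixed rule) can be illustrated with a minimal sketch. This is not the authors' recurrent-network RL model; it is a hypothetical stochastic-search stand-in, with an invented `simulated_yield` surface in place of a real microdroplet reaction, showing only the record-then-explore structure of the stochastic policy.

```python
import random

def simulated_yield(temp, flow):
    # Hypothetical unimodal yield surface standing in for a real reaction;
    # peak yield is at 60 C and 2.0 mL/min in this toy example.
    return 100 - 0.05 * (temp - 60) ** 2 - 10 * (flow - 2.0) ** 2

def optimize(n_steps=50, seed=0):
    rng = random.Random(seed)
    # Stochastic policy: conditions are drawn from Gaussians whose means
    # track the best conditions observed so far.
    mean = [25.0, 1.0]    # initial guess: temperature (C), flow rate (mL/min)
    sigma = [15.0, 0.8]   # exploration widths per condition
    best = (simulated_yield(*mean), list(mean))
    for _ in range(n_steps):
        # Explore: sample candidate conditions from the current distribution.
        cand = [rng.gauss(m, s) for m, s in zip(mean, sigma)]
        outcome = simulated_yield(*cand)
        # Record: keep the result and shift the policy toward improvements.
        if outcome > best[0]:
            best = (outcome, cand)
            mean = cand
        # Anneal exploration so the search narrows over time.
        sigma = [s * 0.95 for s in sigma]
    return best

best_yield, best_cond = optimize()
```

Sampling conditions from a distribution (rather than always taking the deterministic best guess) is what lets the search escape poor local choices, which is the intuition behind the regret improvement the abstract reports for the stochastic policy.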

