Journal: Energies

Environment-Friendly Power Scheduling Based on Deep Contextual Reinforcement Learning

Abstract

A novel approach to power scheduling is introduced, focusing on minimizing both economic and environmental impacts. This method utilizes deep contextual reinforcement learning (RL) within an agent-based simulation environment. Each generating unit is treated as an independent, heterogeneous agent, and the scheduling dynamics are formulated as Markov decision processes (MDPs). The MDPs are then used to train a deep RL model to determine optimal power schedules. The performance of this approach is evaluated across various power systems, including both small-scale and large-scale systems with up to 100 units. The results demonstrate that the proposed method exhibits superior performance and scalability in handling power systems with a larger number of units.
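The abstract does not detail the formulation, but the following minimal Python sketch illustrates the general idea of treating each heterogeneous generating unit as an independent agent in an MDP whose reward combines economic and environmental costs. The class and coefficient names (`UnitAgent`, `fuel_coeff`, `emis_coeff`) and the random policy are illustrative placeholders, not the paper's trained deep contextual RL model.

```python
# Sketch (not the paper's implementation): per-unit agents, MDP-style
# states/actions/rewards, and a rollout over a short demand profile.
import random
from dataclasses import dataclass

@dataclass
class UnitAgent:
    """One heterogeneous generating unit treated as an independent agent."""
    p_min: float        # minimum output when committed (MW)
    p_max: float        # maximum output (MW)
    fuel_coeff: float   # economic cost per MWh (assumed placeholder)
    emis_coeff: float   # environmental cost per MWh (assumed placeholder)

    def reward(self, output: float) -> float:
        # Negative weighted sum of economic and environmental impact.
        return -(self.fuel_coeff * output + self.emis_coeff * output)

def run_episode(units, demand_profile):
    """Roll out one scheduling horizon; a random policy stands in for the
    deep RL policy that would be trained on the MDP."""
    total_reward = 0.0
    for demand in demand_profile:      # state: demand at each period
        remaining = demand
        for unit in units:
            # action: choose an output level within the unit's limits
            output = random.uniform(unit.p_min, unit.p_max)
            output = min(output, max(remaining, 0.0))
            remaining -= output
            total_reward += unit.reward(output)
        # penalize any demand left unmet after all agents act
        total_reward -= 100.0 * max(remaining, 0.0)
    return total_reward

if __name__ == "__main__":
    fleet = [UnitAgent(10, 100, fuel_coeff=20.0, emis_coeff=5.0),
             UnitAgent(20, 200, fuel_coeff=15.0, emis_coeff=12.0)]
    print(run_episode(fleet, demand_profile=[150.0, 180.0, 120.0]))
```

In the paper's setup such rollouts would be generated in the agent-based simulation environment and used to train the deep contextual RL policy that replaces the random action choice above.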
