IEEE Transactions on Smart Grid

Two-Stage Deep Reinforcement Learning for Inverter-Based Volt-VAR Control in Active Distribution Networks

Abstract

Model-based Volt/VAR optimization is widely used to eliminate voltage violations and reduce network losses. However, the parameters of active distribution networks (ADNs) are often not identified on site, so the model may contain significant errors that render model-based methods infeasible. To cope with this critical issue, we propose a novel two-stage deep reinforcement learning (DRL) method that improves the voltage profile by regulating inverter-based energy resources; it consists of an offline stage and an online stage. In the offline stage, a highly efficient adversarial reinforcement learning algorithm is developed to train an offline agent that is robust to model mismatch. In the subsequent online stage, the offline agent is safely transferred as the online agent to perform continuous learning and control online, with significantly improved safety and efficiency. Numerical simulations on IEEE test networks not only demonstrate that the proposed adversarial reinforcement learning algorithm outperforms the state-of-the-art algorithm, but also show that the proposed two-stage method achieves much better online performance than existing DRL-based methods.
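
The abstract only outlines the two-stage workflow at a high level. The toy sketch below illustrates how such an offline/online split could be organized: an agent is first trained offline against randomly perturbed model parameters (a simplified stand-in for the adversarial training) and is then transferred to an online stage where it keeps adapting with smaller, more conservative updates. All names (`InverterAgent`, `approximate_grid`, `improve_gain`) and the hill-climbing update rule are hypothetical placeholders, not the authors' DRL implementation.

```python
# Toy sketch of the offline/online split described in the abstract.
# Everything here (InverterAgent, approximate_grid, the hill-climbing update)
# is a hypothetical placeholder, not the authors' DRL implementation.

import random


class InverterAgent:
    """Droop-like policy: reactive power proportional to the voltage deviation."""

    def __init__(self, gain=0.2):
        self.gain = gain

    def act(self, voltage_pu):
        return -self.gain * (voltage_pu - 1.0)


def approximate_grid(voltage_pu, q_injection, reactance):
    """Crude surrogate of the network model: Q injection shifts the bus voltage."""
    return voltage_pu + reactance * q_injection


def improve_gain(agent, voltage, reactance, step):
    """Hill-climbing stand-in for a DRL policy update: keep the candidate gain
    that yields the smallest voltage deviation on this sample."""
    candidates = [max(0.0, agent.gain + d) for d in (-step, 0.0, step)]
    deviations = [
        abs(approximate_grid(voltage, -g * (voltage - 1.0), reactance) - 1.0)
        for g in candidates
    ]
    agent.gain = candidates[deviations.index(min(deviations))]


def offline_adversarial_training(agent, episodes=2000):
    """Stage 1: train offline against randomly perturbed model parameters,
    a simplified stand-in for the adversary that exploits model mismatch."""
    for _ in range(episodes):
        voltage = random.uniform(0.92, 1.08)
        reactance = random.uniform(0.05, 0.30)  # adversarial parameter mismatch
        improve_gain(agent, voltage, reactance, step=0.05)
    return agent


def online_continuous_learning(agent, measurements, true_reactance=0.15):
    """Stage 2: transfer the offline agent and keep adapting with small,
    conservative steps while it controls the feeder online."""
    for voltage in measurements:
        improve_gain(agent, voltage, true_reactance, step=0.01)
    return agent


if __name__ == "__main__":
    agent = offline_adversarial_training(InverterAgent())
    agent = online_continuous_learning(agent, measurements=[1.06, 1.04, 0.95, 0.97])
    print(f"final controller gain: {agent.gain:.3f}")
```

In this sketch the "adversary" is reduced to random sampling of the model mismatch and the policy update to a one-parameter search; the paper's method instead trains a deep RL agent against a learned adversary offline and transfers it for safe continuous learning online.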