IEEE Transactions on Smart Grid

Deep Reinforcement Learning for Demand Response in Distribution Networks



Abstract

Load aggregators can use demand response programs to motivate residential users to reduce their electricity demand during peak periods. This article proposes a demand response algorithm for residential users that accounts for uncertainties in load demand and electricity price, users' privacy concerns, and the power flow constraints imposed by the distribution network. To address the uncertainty issues, we develop a deep reinforcement learning (DRL) algorithm using an actor-critic method. We apply federated learning to enable users to determine the neural network parameters in a decentralized fashion without sharing private information (e.g., load demand or users' potential discomfort due to load scheduling). To tackle the nonconvex power flow constraints, we apply convex relaxation and transform the problem of updating the neural network parameters into a sequence of semidefinite programs (SDPs). Simulations on an IEEE 33-bus test feeder with 32 households show that the proposed demand response algorithm can reduce the peak load by 33% and the expected cost of each user by 13%. We also demonstrate the scalability of the proposed algorithm on 330-bus and 1650-bus feeders under a real-time pricing scheme.
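The abstract describes a federated actor-critic scheme in which each household trains its own policy network locally and only the network parameters are shared with the load aggregator. The following is a minimal, illustrative sketch of that parameter-averaging step (FedAvg-style); the network architecture, dimensions, and names such as `init_actor_params` are assumptions made for illustration, and the local update shown is a placeholder rather than the authors' SDP-constrained actor-critic update.

```python
# Illustrative sketch (not the paper's implementation) of federated averaging
# of per-household actor parameters: only parameter vectors are exchanged,
# never the private load-demand or discomfort data.
import numpy as np

STATE_DIM = 8    # assumed local state: price, temperature, appliance status, ...
ACTION_DIM = 3   # assumed action: per-appliance power adjustments
HIDDEN = 16

def init_actor_params(rng):
    """Randomly initialise a small two-layer actor network for one household."""
    return {
        "W1": rng.normal(0.0, 0.1, (STATE_DIM, HIDDEN)),
        "b1": np.zeros(HIDDEN),
        "W2": rng.normal(0.0, 0.1, (HIDDEN, ACTION_DIM)),
        "b2": np.zeros(ACTION_DIM),
    }

def local_update(params, rng, lr=1e-3):
    """Placeholder for one local actor-critic update computed from the
    household's private data; here a dummy gradient step stands in."""
    return {k: v - lr * rng.normal(0.0, 1.0, v.shape) for k, v in params.items()}

def federated_average(all_params):
    """Aggregator averages parameters across households (FedAvg-style)."""
    keys = all_params[0].keys()
    return {k: np.mean([p[k] for p in all_params], axis=0) for k in keys}

rng = np.random.default_rng(0)
households = [init_actor_params(rng) for _ in range(32)]  # 32 households, as in the test case

for _ in range(5):  # a few federated rounds
    households = [local_update(p, rng) for p in households]
    global_params = federated_average(households)
    households = [dict(global_params) for _ in households]  # broadcast averaged parameters back
```

In this sketch the aggregator only ever sees parameter arrays, which mirrors the privacy argument in the abstract; the actual per-household objective, discomfort model, and the SDP-based handling of power flow constraints are described in the full paper.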

