IET Smart Grid

Decentralised demand response market model based on reinforcement learning



Abstract

A new decentralised demand response (DR) model relying on bi-directional communications is developed in this study. In this model, each user is considered as an agent that submits its bids according to its consumption urgency and a set of parameters defined by a reinforcement learning algorithm called Q-learning. The bids are sent to a local DR market, which is responsible for communicating all bids to the wholesale market and the system operator (SO), and for reporting back to the customers after determining the local DR market clearing price. From the local markets’ viewpoint, the goal is to maximise social welfare. Four DR levels are considered to evaluate the effect of different DR portions on the cost of electricity purchase. The outcomes are compared with those achieved by a centralised approach (aggregation-based model) as well as an uncontrolled method. Numerical studies show that the proposed decentralised model markedly reduces the electricity cost compared with the uncontrolled method, performing nearly as well as the centralised approach.
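The abstract describes each user as a Q-learning agent that maps its consumption urgency to a discrete bid price submitted to the local DR market. The following is a minimal, hypothetical sketch of that idea, not the paper's actual model: the state space, bid levels, clearing rule, and reward shape below are illustrative assumptions chosen to make a self-contained example.

```python
import random

random.seed(0)  # deterministic toy run

class QLearningBidder:
    """Toy DR user: state = consumption urgency, action = discrete bid price.
    All hyperparameters here are illustrative, not taken from the paper."""

    def __init__(self, urgency_levels=3, price_levels=4,
                 alpha=0.2, gamma=0.9, epsilon=0.1):
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.price_levels = price_levels
        # Q[urgency][bid_level], initialised to zero
        self.Q = [[0.0] * price_levels for _ in range(urgency_levels)]

    def choose_bid(self, urgency):
        # epsilon-greedy exploration over bid price levels
        if random.random() < self.epsilon:
            return random.randrange(self.price_levels)
        row = self.Q[urgency]
        return row.index(max(row))

    def update(self, s, a, reward, s_next):
        # standard Q-learning temporal-difference update
        td_target = reward + self.gamma * max(self.Q[s_next])
        self.Q[s][a] += self.alpha * (td_target - self.Q[s][a])

def reward_fn(urgency, bid, clearing_price):
    # Assumed reward: consumption is worth (urgency + 1); a bid at or
    # above the clearing price is accepted and pays that price,
    # otherwise the user earns nothing this round.
    if bid >= clearing_price:
        return (urgency + 1) - clearing_price
    return 0.0

agent = QLearningBidder()
for episode in range(2000):
    urgency = random.randrange(3)
    bid = agent.choose_bid(urgency)
    clearing_price = random.choice([1, 2])  # exogenous local market price
    r = reward_fn(urgency, bid, clearing_price)
    next_urgency = random.randrange(3)
    agent.update(urgency, bid, r, next_urgency)
```

Under this toy reward, a high-urgency user learns to bid high enough to clear at either price, while a low-urgency user learns to avoid overpaying; the paper's decentralised design keeps this learning local to each agent, with only bids sent to the local DR market.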
