Applied Energy

Reinforcement learning for optimal control of low exergy buildings

Abstract

Over a third of anthropogenic greenhouse gas (GHG) emissions stem from the cooling and heating of buildings, due to their fossil fuel based operation. Low exergy building systems are a promising approach to reduce energy consumption as well as GHG emissions. They consist of renewable energy technologies such as PV, PV/T and heat pumps. Since careful tuning of parameters is required, a manual setup may result in sub-optimal operation. A model predictive control approach is unnecessarily complex because of the required model identification. Therefore, in this work we present a reinforcement learning control (RLC) approach. The studied building comprises a PV/T array for solar heat and electricity generation, as well as geothermal heat pumps. We present RLC for the PV/T array and for the full building model. Two methods, Tabular Q-learning and Batch Q-learning with Memory Replay, are implemented with real building settings and actual weather conditions in a Matlab/Simulink framework. The performance is evaluated against standard rule-based control (RBC). We investigate different neural network structures and find that some outperform RBC already during the learning phase. Overall, every RLC strategy for PV/T outperforms RBC by over 10% after the third year. Likewise, for the full building, RLC outperforms RBC in terms of meeting the heating demand, maintaining the optimal operating temperature and compensating more effectively for ground heat. This makes it possible to reduce the engineering costs associated with the setup of these systems and to shorten the return-on-investment period, both of which are necessary to create a sustainable, zero-emission building stock. (C) 2015 Elsevier Ltd. All rights reserved.
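The abstract names two controllers, Tabular Q-learning and Batch Q-learning with Memory Replay, implemented against a Matlab/Simulink building model. As a rough illustration only, the Python sketch below shows the tabular Q-learning update on a made-up pump-control task; the state/action discretization, the reward, and the toy_env_step() environment are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

# Toy tabular Q-learning sketch for a simplified pump-control task.
# The state/action discretization, reward, and toy_env_step() below are
# illustrative assumptions only; the paper's controllers act on a
# Matlab/Simulink building model driven by real weather data.

N_STATES = 10           # e.g. binned collector outlet temperature (assumed)
N_ACTIONS = 2           # 0 = pump off, 1 = pump on (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

Q = np.zeros((N_STATES, N_ACTIONS))
rng = np.random.default_rng(0)

def toy_env_step(state, action):
    """Hypothetical stand-in for the building simulation."""
    drift = 1 if action == 1 else -1
    next_state = int(np.clip(state + drift + rng.integers(-1, 2), 0, N_STATES - 1))
    reward = -abs(next_state - 6)   # penalize deviation from a target temperature bin
    return next_state, reward

state = 0
for _ in range(50_000):
    # epsilon-greedy exploration
    action = rng.integers(N_ACTIONS) if rng.random() < EPS else int(np.argmax(Q[state]))
    next_state, reward = toy_env_step(state, action)
    # Tabular Q-learning update:
    #   Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max() - Q[state, action])
    state = next_state
```

The Batch Q-learning with Memory Replay variant mentioned in the abstract would instead store (s, a, r, s') transitions in a buffer and periodically refit the value function (e.g. a neural network) on sampled batches, rather than updating one table entry per step.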
