
Building Energy Consumption Raw Data Forecasting Using Data Cleaning and Deep Recurrent Neural Networks


Abstract

With the rising focus on building energy big data analysis, a framework for preprocessing raw data, in particular for handling missing values in the raw data set, is still lacking. This study presents a methodology and framework for forecasting building energy consumption from raw data. A case building is used to forecast energy consumption with deep recurrent neural networks. Four methods for imputing missing data in the raw data set are implemented and compared, and the sensitivity of imputation accuracy to gap size and to the percentage of available data is tested. The cleaned data are then used for building energy forecasting. Existing studies have explored only small recurrent networks of two layers or fewer, leaving open the question of whether deeper networks perform better for building energy consumption forecasting; in addition, overfitting has been cited as a significant problem when using deep networks. This study therefore uses deep recurrent neural networks to explore deeper architectures and their regularization in the context of an energy load forecasting task. The results show that a mean absolute error of 2.1 can be achieved with a 2 × 32 gated neural network model. In applying regularization to counter model overfitting, the study found that weight regularization did indeed delay the onset of overfitting.
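For illustration only, and not the authors' implementation: the sketch below shows, under stated assumptions, how a gap in an hourly consumption series might be filled by simple linear interpolation before windowing, and how a 2 × 32 gated recurrent forecaster with L2 weight regularization could be set up in Keras. The layer sizes follow the abstract's "2*32 gated" description; the choice of GRU cells, the 24-step lookback window, the regularization factor, and linear interpolation as the imputation step are assumptions, and the paper's four imputation methods are not reproduced here.

```python
# Illustrative sketch only; layer type, lookback, and imputation method are assumptions.
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# --- Data cleaning (assumption: linear interpolation as one simple imputation option) ---
raw = pd.Series(np.random.rand(500))          # stand-in for an hourly load series
raw.iloc[100:106] = np.nan                    # an artificial 6-hour gap
clean = raw.interpolate(method="linear")      # fill the gap before windowing

# --- Windowing: 24 past hours -> next hour (lookback length is an assumption) ---
LOOKBACK = 24
values = clean.to_numpy(dtype="float32")
X = np.stack([values[i:i + LOOKBACK] for i in range(len(values) - LOOKBACK)])[..., None]
y = values[LOOKBACK:].reshape(-1, 1)

# --- 2 x 32 gated recurrent model with L2 weight regularization ---
model = tf.keras.Sequential([
    layers.Input(shape=(LOOKBACK, 1)),
    layers.GRU(32, return_sequences=True,
               kernel_regularizer=regularizers.l2(1e-4)),
    layers.GRU(32, kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dense(1),                          # next-step load prediction
])
model.compile(optimizer="adam", loss="mae")   # MAE matches the reported error metric
model.fit(X, y, validation_split=0.2, epochs=5, batch_size=32, verbose=0)
```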
