Parallel Computing

Predicting power consumption of GPUs with fuzzy wavelet neural networks



Abstract

Prediction and optimization of power consumption have become essential concerns in general-purpose computing on graphics processing units (GPUs) because of the increasing prevalence of GPUs and the constraints on energy consumption. However, previous approaches to building power models need to extract power-related program features from performance counters or GPU emulators. These approaches cannot estimate the power consumption of applications for software designers, owing to the lack of detailed information about GPU architectures or the absence of emulator support. In this study, we explore a novel method to model the GPU power consumption of applications during the computing process. Using program slicing, we decompose the source code of applications into slices and extract power-related static program features. Each slice serves as the basic unit for training a power model based on fuzzy wavelet artificial neural networks. This allows programmers to investigate the power profile of their applications and identify the code regions with higher energy consumption. To improve prediction accuracy, we further divide GPU applications into two categories according to their branch structure: sparseness-branch and denseness-branch. A slicing-based power model is proposed for sparseness-branch programs. For denseness-branch programs, probabilistic slicing is used to reduce invalid slices and thereby improve accuracy. The models are empirically validated on typical GPU benchmarks, and the results are compared with measured power. Overall, the average error of our power models is less than 6%. (C) 2015 Elsevier B.V. All rights reserved.
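The abstract names the modelling technique (a fuzzy wavelet artificial neural network trained on static, power-related features of program slices) but gives no equations, so the Python sketch below only illustrates one common fuzzy wavelet formulation: Gaussian antecedent memberships gating Mexican-hat wavelet consequents, with the prediction taken as the firing-strength-weighted average of the rule outputs. The class name, feature count, parameter initialization, and the absence of a training loop are illustrative assumptions, not the authors' implementation; in practice the parameters would be fitted (e.g. by gradient descent) against measured per-slice power.

```python
import numpy as np

def mexican_hat(z):
    """Mexican-hat mother wavelet, a common choice for wavelet neurons."""
    return (1.0 - z ** 2) * np.exp(-0.5 * z ** 2)

class FuzzyWaveletNN:
    """Minimal fuzzy wavelet network sketch: Gaussian fuzzy antecedents gate
    wavelet-neuron consequents; the output is the normalized,
    firing-strength-weighted sum of the rule outputs."""

    def __init__(self, n_inputs, n_rules, seed=0):
        rng = np.random.default_rng(seed)
        # Antecedent part: Gaussian membership centres and widths per rule/input.
        self.c = rng.uniform(0.0, 1.0, (n_rules, n_inputs))
        self.sigma = np.full((n_rules, n_inputs), 0.5)
        # Consequent part: wavelet translations, dilations and rule weights.
        self.t = rng.uniform(0.0, 1.0, (n_rules, n_inputs))
        self.d = np.ones((n_rules, n_inputs))
        self.w = rng.normal(0.0, 0.1, n_rules)

    def predict(self, x):
        """Predict power for one vector x of static, power-related slice
        features (hypothetically, normalized instruction-mix counts)."""
        mu = np.exp(-0.5 * ((x - self.c) / self.sigma) ** 2)   # memberships, (rules, inputs)
        firing = mu.prod(axis=1)                               # rule firing strengths
        psi = mexican_hat((x - self.t) / self.d).prod(axis=1)  # wavelet output per rule
        return np.sum(firing * self.w * psi) / (np.sum(firing) + 1e-12)

# Example: predict power for one slice described by 8 static features in [0, 1].
model = FuzzyWaveletNN(n_inputs=8, n_rules=5)
print(model.predict(np.random.default_rng(1).uniform(0.0, 1.0, 8)))
```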
