Theoretical and Empirical Analysis of a GPU Based Parallel Bayesian Optimization Algorithm
Abstract

General-purpose computing on graphics processing units (GPGPU) marks a major paradigm shift in parallel computing that promises dramatic performance gains. But GPGPUs also bring an unprecedented level of complexity to algorithm design and software development. In this paper we describe the challenges and design choices involved in parallelizing the Bayesian optimization algorithm (BOA) to solve complex combinatorial optimization problems on nVidia commodity graphics hardware using the Compute Unified Device Architecture (CUDA). BOA is a well-known multivariate estimation of distribution algorithm (EDA) that incorporates methods for learning a Bayesian network (BN); the learned network is then sampled to generate new promising solutions. Our implementation is fully compatible with modern commodity GPUs, and we therefore call it gBOA (BOA on GPU). In the results section, we present several numerical tests and performance measurements obtained by running gBOA on an nVidia Tesla C1060 GPU. We show that in the best case we obtain a speedup of up to 13x.
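The EDA loop the abstract describes, select promising solutions, fit a Bayesian network to them, then sample new candidates from the network, can be illustrated with a minimal sketch. This is not the paper's gBOA implementation: it runs on the CPU, fixes a chain-structured network instead of learning the structure, and uses the toy OneMax problem; all names and parameters here are hypothetical.

```python
import random

def onemax(bits):
    # Toy fitness: number of ones (maximize).
    return sum(bits)

def learn_chain_model(pop, n):
    # Simplified stand-in for BOA's BN learning: fix the chain
    # x0 -> x1 -> ... and estimate P(x0=1) and P(x_i=1 | x_{i-1})
    # by counting in the selected population (with Laplace smoothing).
    m = len(pop)
    p0 = sum(ind[0] for ind in pop) / m
    cond = []  # cond[i-1][prev] = P(x_i = 1 | x_{i-1} = prev)
    for i in range(1, n):
        counts = [[1, 1], [1, 1]]  # counts[prev][value], smoothed
        for ind in pop:
            counts[ind[i - 1]][ind[i]] += 1
        cond.append([counts[prev][1] / (counts[prev][0] + counts[prev][1])
                     for prev in (0, 1)])
    return p0, cond

def sample_model(p0, cond, n, rng):
    # Ancestral sampling along the chain: draw x0, then each x_i
    # conditioned on the sampled value of x_{i-1}.
    bits = [1 if rng.random() < p0 else 0]
    for i in range(1, n):
        p = cond[i - 1][bits[-1]]
        bits.append(1 if rng.random() < p else 0)
    return bits

def boa_sketch(n=20, pop_size=100, generations=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=onemax, reverse=True)
        selected = pop[:pop_size // 2]        # truncation selection
        p0, cond = learn_chain_model(selected, n)
        offspring = [sample_model(p0, cond, n, rng)
                     for _ in range(pop_size // 2)]
        pop = selected + offspring            # elitist replacement
    return max(pop, key=onemax)

best = boa_sketch()
```

In gBOA the expensive steps of this loop, fitness evaluation, network scoring, and sampling, are the natural candidates for offloading to the GPU, since each individual (or each variable) can be processed by an independent CUDA thread.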
