IEEE Transactions on Knowledge and Data Engineering

Efficient Multi-Class Probabilistic SVMs on GPUs

Abstract

Recently, many researchers have been working on improving traditional machine learning algorithms (besides deep learning) using high-performance hardware such as Graphics Processing Units (GPUs). The recent success of machine learning is due not only to more effective algorithms, but also to more efficient systems and implementations. In this paper, we propose a novel and efficient GPU-accelerated solution to multi-class SVMs with probabilistic output (MP-SVMs). MP-SVMs are an important technique for many pattern recognition applications. However, they are very time-consuming to use, because an MP-SVM classifier requires training many binary SVMs and estimating probabilities by combining the results of all the binary SVMs. GPUs have much higher computation capability than CPUs and are potentially excellent hardware for accelerating MP-SVMs. Still, efficient GPU acceleration of MP-SVMs faces two key challenges: (i) many kernel values are computed repeatedly as a binary SVM classifier is trained iteratively, resulting in repeated accesses to high-latency GPU memory; (ii) performing training or probability estimation in a highly parallel way requires a memory footprint much larger than the GPU memory. To overcome these challenges, we propose a solution called GMP-SVM, which exploits two-level optimization (i.e., at the binary SVM level and the MP-SVM level) for training MP-SVMs and high parallelism for estimating probabilities. GMP-SVM reduces high-latency memory accesses and memory consumption through batch processing, kernel value reuse and sharing, and support vector sharing. Experimental results show that GMP-SVM outperforms the GPU baseline by two to five times, and LibSVM with OpenMP by an order of magnitude. Also, GMP-SVM produces the same SVM classifier as LibSVM.
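As a concrete illustration of the MP-SVM pipeline the abstract describes (not of the paper's GMP-SVM implementation), the sketch below uses scikit-learn's SVC, which wraps LibSVM, the CPU baseline the paper compares against. With probability=True, LibSVM trains one binary SVM per pair of classes (k(k-1)/2 for k classes), fits a Platt-scaling sigmoid to each via internal cross-validation, and couples the pairwise estimates into per-class probabilities. The dataset and parameters here are illustrative choices, not the paper's experimental setup.

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # A 10-class problem: LibSVM trains 45 pairwise binary SVMs internally.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # probability=True enables Platt scaling plus pairwise coupling; LibSVM
    # runs an internal 5-fold cross-validation per binary SVM to fit each
    # sigmoid, which is a large part of why MP-SVM training is so expensive.
    clf = SVC(kernel="rbf", gamma="scale", C=1.0, probability=True)
    clf.fit(X_train, y_train)

    proba = clf.predict_proba(X_test)   # coupled per-class probabilities
    print(proba.shape)                  # (n_test_samples, 10)
    print("predicted:", clf.classes_[proba[0].argmax()], "true:", y_test[0])

Everything above runs on the CPU. The paper's contribution is carrying out the equivalent binary-SVM training and probability estimation on the GPU, reusing kernel values and support vectors across the many binary subproblems to cut high-latency memory traffic and memory footprint.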