International Conference on Supercomputing

Approximate kernel matrix computation on GPUs for large scale learning applications

Abstract

Kernel-based learning methods require quadratic space and time to compute the kernel matrix. These complexities make kernel methods impractical for large-scale problems with millions of data points. In this paper, we introduce a novel representation of kernel matrices on Graphics Processing Units (GPUs). The representation exploits the sparsity of the kernel matrix to address the space complexity problem. It also follows the GPU memory-access guidelines that are critical for good performance, addressing the time complexity problem. Our representation uses the locality-preserving properties of space-filling curves to obtain a band approximation of the kernel matrix. To validate the representation, we use Affinity Propagation (AP), an unsupervised clustering algorithm, as an example kernel method. Experimental results show a 40x speedup of AP using our representation, with no degradation in clustering performance.
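The band approximation described above can be illustrated with a short sketch. The following NumPy code is not the authors' implementation: it assumes 2-D points and uses a Z-order (Morton) curve as one example of a space-filling curve (the abstract does not specify the curve or the GPU data layout). It sorts the points along the curve so that spatial neighbors land near the matrix diagonal, then evaluates an RBF kernel only within a fixed band. The names `morton_key`, `band_rbf_kernel`, and the parameters `half_width` and `gamma` are illustrative.

```python
import numpy as np

def morton_key(points, bits=16):
    """Z-order (Morton) key: interleave the bits of quantized 2-D coordinates.
    Points close along the curve tend to be close in space, so sorting by
    this key clusters near neighbors around the kernel matrix diagonal."""
    mins, maxs = points.min(0), points.max(0)
    # Quantize each coordinate onto a 2^bits integer grid.
    q = ((points - mins) / (maxs - mins + 1e-12) * (2**bits - 1)).astype(np.uint64)
    keys = np.zeros(len(points), dtype=np.uint64)
    for b in range(bits):
        bb = np.uint64(b)
        keys |= ((q[:, 0] >> bb) & np.uint64(1)) << (np.uint64(2) * bb)
        keys |= ((q[:, 1] >> bb) & np.uint64(1)) << (np.uint64(2) * bb + np.uint64(1))
    return keys

def band_rbf_kernel(points, gamma=1.0, half_width=32):
    """Band approximation: after Z-order sorting, evaluate the RBF kernel
    only for pairs within `half_width` positions of the diagonal.
    Storage and work drop from O(n^2) to O(n * half_width)."""
    order = np.argsort(morton_key(points))
    x = points[order]
    n = len(x)
    # Row i of `band` holds k(x_i, x_{i+off}) for off in [-w, w] (sorted order).
    band = np.zeros((n, 2 * half_width + 1))
    for off in range(-half_width, half_width + 1):
        lo, hi = max(0, -off), min(n, n - off)
        d2 = np.sum((x[lo:hi] - x[lo + off:hi + off]) ** 2, axis=1)
        band[lo:hi, off + half_width] = np.exp(-gamma * d2)
    return band, order

# Example usage on synthetic data.
rng = np.random.default_rng(0)
pts = rng.random((1000, 2))
band, order = band_rbf_kernel(pts, gamma=10.0, half_width=32)
```

Because the curve keeps spatial neighbors near the diagonal, the dense band captures most of the large kernel entries while the rest are dropped; on a GPU, the regular band rows also lend themselves to the coalesced memory accesses the abstract alludes to.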
