Journal of Harbin Institute of Technology (哈尔滨工业大学学报)

A Large-Scale Prototype Learning Algorithm for Heterogeneous Parallel Architectures


Abstract

Current prototype learning algorithms impose a heavy computational burden in large-category machine learning and pattern recognition tasks. To remove this bottleneck, a principled, scalable prototype learning framework is proposed, built on a heterogeneous parallel computing architecture that combines GPUs and CPUs. First, by splitting and rearranging the computing tasks, the framework shifts the compute-intensive workload to the GPU, leaving the CPU to manage only a small amount of control flow. Second, the framework adaptively chooses between a tiling strategy and a parallel-reduction strategy according to the type of workload. Evaluations on a large handwritten Chinese character database show a speedup of up to 194x for mini-batch learning on a consumer-level GTX 680 card, rising to 638x on a newer GTX 980. Even in the harder-to-accelerate stochastic gradient descent (SGD) setting, a speedup of more than 30x is observed. The proposed framework is highly scalable while preserving recognition accuracy, and effectively resolves the computational bottleneck of prototype learning.
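The tiling-versus-reduction choice described in the abstract can be made concrete with a brief CUDA sketch. This is a minimal illustration, not the paper's actual implementation: the kernel names, the squared-Euclidean distance measure, and the host-side dispatch heuristic are all assumptions. The underlying idea is that when a batch holds enough samples to fill a 2-D tile, a shared-memory tiled kernel (the same access pattern as tiled matrix multiply) amortizes global-memory traffic across the sample-by-prototype distance matrix, whereas a tiny batch (as in SGD, batch size 1) is better served by a per-pair parallel reduction over the feature dimension.

```cuda
// Hypothetical sketch of the two kernel strategies the abstract mentions.
// Computes squared Euclidean distances between N samples and M prototypes
// of dimension D; names and heuristic are illustrative assumptions.
#include <cuda_runtime.h>

#define TILE 16

// Strategy 1: tiling. Each block computes a TILE x TILE patch of the
// N x M distance matrix, staging slices of samples and prototypes in
// shared memory.
__global__ void dist_tiled(const float* __restrict__ X,  // N x D samples
                           const float* __restrict__ P,  // M x D prototypes
                           float* __restrict__ Dm,       // N x M distances
                           int N, int M, int D) {
    __shared__ float xs[TILE][TILE];
    __shared__ float ps[TILE][TILE];
    int row = blockIdx.y * TILE + threadIdx.y;   // sample index
    int col = blockIdx.x * TILE + threadIdx.x;   // prototype index
    float acc = 0.f;
    for (int t = 0; t < D; t += TILE) {
        xs[threadIdx.y][threadIdx.x] =
            (row < N && t + threadIdx.x < D) ? X[row * D + t + threadIdx.x] : 0.f;
        ps[threadIdx.y][threadIdx.x] =
            (col < M && t + threadIdx.y < D) ? P[col * D + t + threadIdx.y] : 0.f;
        __syncthreads();
        for (int k = 0; k < TILE && t + k < D; ++k) {
            float d = xs[threadIdx.y][k] - ps[k][threadIdx.x];
            acc += d * d;
        }
        __syncthreads();
    }
    if (row < N && col < M) Dm[row * M + col] = acc;
}

// Strategy 2: parallel reduction. One block per (sample, prototype) pair;
// threads stride over the feature dimension, then reduce in shared memory.
// Pays off when D is large but the batch (grid's y extent) is small.
__global__ void dist_reduce(const float* __restrict__ X,
                            const float* __restrict__ P,
                            float* __restrict__ Dm,
                            int M, int D) {
    extern __shared__ float buf[];
    int n = blockIdx.y, m = blockIdx.x;
    float acc = 0.f;
    for (int k = threadIdx.x; k < D; k += blockDim.x) {
        float d = X[n * D + k] - P[m * D + k];
        acc += d * d;
    }
    buf[threadIdx.x] = acc;
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {  // blockDim.x: power of 2
        if (threadIdx.x < s) buf[threadIdx.x] += buf[threadIdx.x + s];
        __syncthreads();
    }
    if (threadIdx.x == 0) Dm[n * M + m] = buf[0];
}

// Host-side dispatch: a toy stand-in for the paper's adaptive choice.
void compute_distances(const float* X, const float* P, float* Dm,
                       int N, int M, int D) {
    if (N >= TILE) {  // enough samples to fill 2-D tiles: tiling wins
        dim3 block(TILE, TILE);
        dim3 grid((M + TILE - 1) / TILE, (N + TILE - 1) / TILE);
        dist_tiled<<<grid, block>>>(X, P, Dm, N, M, D);
    } else {          // tiny batch (e.g. SGD): reduce over the feature dim
        int threads = 256;
        dim3 grid(M, N);
        dist_reduce<<<grid, threads, threads * sizeof(float)>>>(X, P, Dm, M, D);
    }
}
```

In this reading, the CPU's role reduces to the kind of lightweight control flow the abstract describes: picking a kernel and configuring the launch, while all distance arithmetic stays on the GPU.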
