Journal: IEEE Transactions on Very Large Scale Integration (VLSI) Systems
Hybrid Working Set Algorithm for SVM Learning With a Kernel Coprocessor on FPGA

Abstract

Support vector machines (SVMs) are a popular class of supervised models in machine learning. The associated compute-intensive learning algorithm limits their use in real-time applications. This paper presents a fully scalable coprocessor architecture that can compute multiple rows of the kernel matrix in parallel. Further, we propose an extended variant of the popular decomposition technique, sequential minimal optimization (SMO), which we call the hybrid working set (HWS) algorithm, to effectively utilize the benefits of cached kernel columns and the parallel computational power of the coprocessor. The coprocessor is implemented on the Xilinx Virtex-7 field-programmable gate array based VC707 board and achieves a speedup of up to for kernel computation over single-threaded computation on an Intel Core i5. An application speedup of up to over software implementation of and a speedup of up to over SVM is achieved using the HWS algorithm in unison with the coprocessor. The reduction in the number of iterations and the sensitivity of the optimization time to variation in cache size when using the HWS algorithm are also shown.
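The listing gives only the abstract, not the HWS pseudocode. As a rough, hypothetical illustration of the two ingredients the abstract mentions — an SMO-style decomposition loop and cached kernel columns fetched in batches, as a coprocessor computing several kernel matrix rows in parallel would supply them — the sketch below implements a plain simplified SMO trainer with a small kernel-column cache. Every name here (rbf_kernel_columns, train_svm_smo, the batch size, the eviction policy) is an assumption made for illustration; this is not the authors' HWS algorithm or their hardware interface.

```python
# Minimal sketch: simplified SMO with a kernel-column cache.
# rbf_kernel_columns() stands in for a coprocessor that can return
# several kernel matrix columns per request; all details are assumed.

import numpy as np

def rbf_kernel_columns(X, idx, gamma=0.5):
    """Return kernel matrix columns K[:, idx] for an RBF kernel."""
    d = np.linalg.norm(X[:, None, :] - X[None, idx, :], axis=2)
    return np.exp(-gamma * d ** 2)

def train_svm_smo(X, y, C=1.0, tol=1e-3, max_passes=20, cache_size=32):
    n = X.shape[0]
    alpha = np.zeros(n)
    b = 0.0
    cache = {}                     # column index -> cached kernel column

    def kcol(i):
        if i not in cache:
            # On a miss, fetch a small batch of columns at once; a coprocessor
            # would amortise its latency over many columns here.
            batch = [j for j in range(i, min(i + 4, n)) if j not in cache]
            cols = rbf_kernel_columns(X, batch)
            for k, j in enumerate(batch):
                cache[j] = cols[:, k]
            while len(cache) > cache_size:      # crude FIFO eviction
                cache.pop(next(iter(cache)))
        return cache[i]

    def f(i):                      # decision value for sample i
        return float(np.dot(alpha * y, kcol(i)) + b)

    passes = 0
    while passes < max_passes:
        changed = 0
        for i in range(n):
            Ei = f(i) - y[i]
            if (y[i] * Ei < -tol and alpha[i] < C) or (y[i] * Ei > tol and alpha[i] > 0):
                j = (i + 1 + np.random.randint(n - 1)) % n   # simple second-choice rule
                Ej = f(j) - y[j]
                ai_old, aj_old = alpha[i], alpha[j]
                if y[i] != y[j]:
                    L, H = max(0, aj_old - ai_old), min(C, C + aj_old - ai_old)
                else:
                    L, H = max(0, ai_old + aj_old - C), min(C, ai_old + aj_old)
                if L == H:
                    continue
                Kii, Kjj, Kij = kcol(i)[i], kcol(j)[j], kcol(i)[j]
                eta = 2 * Kij - Kii - Kjj
                if eta >= 0:
                    continue
                alpha[j] = np.clip(aj_old - y[j] * (Ei - Ej) / eta, L, H)
                if abs(alpha[j] - aj_old) < 1e-5:
                    continue
                alpha[i] = ai_old + y[i] * y[j] * (aj_old - alpha[j])
                b1 = b - Ei - y[i] * (alpha[i] - ai_old) * Kii - y[j] * (alpha[j] - aj_old) * Kij
                b2 = b - Ej - y[i] * (alpha[i] - ai_old) * Kij - y[j] * (alpha[j] - aj_old) * Kjj
                b = b1 if 0 < alpha[i] < C else b2 if 0 < alpha[j] < C else (b1 + b2) / 2
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    return alpha, b

if __name__ == "__main__":
    # Tiny synthetic check on a linearly separable problem.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((40, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
    alpha, b = train_svm_smo(X, y)
    pred = np.sign(rbf_kernel_columns(X, list(range(40))).T @ (alpha * y) + b)
```

In the setting the abstract describes, the batched column fetch would be issued to the FPGA coprocessor rather than computed on the host, which is where a working-set strategy that reuses cached columns can reduce both iterations and kernel recomputation.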
