SoutheastCon

Fast online algorithms for Support Vector Machines



Abstract

A novel online (i.e., stochastic gradient) learning algorithm in the primal domain is introduced, and its performance is compared to that of the Sequential Minimal Optimization (SMO) based algorithm for training L1 Support Vector Machines (SVMs) implemented within MATLAB's SVM solver fitcsvm. Their performances are compared on both real and artificial datasets containing up to 15,000 samples; by today's standards, these belong to the class of small and medium datasets. We have shown that classic online learning algorithms implemented in the primal domain can be both extremely efficient and up to two orders of magnitude faster than the SMO algorithm. In particular, unlike for the SMO algorithm, our simulations show that the CPU time of OL SVM does not depend on the SVM design parameters (the penalty parameter C and the Gaussian kernel shape parameter s, or the order of the polynomial kernel). This property is very beneficial for training SVMs on large and ultra-large datasets. The paper also compares OL SVM with the established stochastic gradient algorithm Norma and with a novel version of the online SVM training algorithm Pegasos, dubbed here Pegaz.
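The abstract does not give the OL SVM algorithm itself, but as an illustration of the class of primal-domain stochastic gradient SVM solvers it is compared against (Pegasos and related methods), a minimal Pegasos-style sketch for a linear L1-SVM might look as follows. The toy data, function name, and parameter choices here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def pegasos_train(X, y, lam=0.01, n_iters=1000, seed=0):
    """Pegasos-style stochastic sub-gradient training of a linear L1-SVM
    in the primal domain.  One randomly chosen sample per step; the
    learning rate eta_t = 1 / (lam * t) decays with the step count t.
    (Illustrative sketch only; lam, n_iters, seed are assumed names.)"""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iters + 1):
        i = rng.integers(n)                  # pick one sample at random
        eta = 1.0 / (lam * t)
        if y[i] * (X[i] @ w) < 1.0:
            # margin violated: hinge-loss sub-gradient plus regularizer
            w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
        else:
            # margin satisfied: only the regularizer shrinks w
            w = (1.0 - eta * lam) * w
    return w

# hypothetical, linearly separable toy data
X = np.array([[2.0, 2.0], [1.5, 2.5], [-2.0, -2.0], [-1.0, -2.5]])
y = np.array([1, 1, -1, -1])
w = pegasos_train(X, y)
preds = np.sign(X @ w)
```

Note that the per-step cost of such an update is independent of C and of any kernel shape parameter, which is consistent with the CPU-time behavior the abstract reports for OL SVM.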
