
Faster Support Vector Machines



Abstract

The time complexity of support vector machines (SVMs) prohibits training on huge data sets with millions of samples. Recently, multilevel approaches to training SVMs have been developed to allow for time-efficient training on huge data sets. While regular SVMs perform the entire training in a single, time-consuming optimization step, multilevel SVMs first build a hierarchy of problems of decreasing size that resemble the original problem, and then train an SVM model for each hierarchy level, benefiting from the solved models of previous levels. We present a faster multilevel support vector machine that uses a label propagation algorithm to construct the problem hierarchy. Extensive experiments show that our new algorithm achieves speed-ups of up to two orders of magnitude while having similar or better classification quality than state-of-the-art algorithms.
