
Fast Incremental SVDD Learning Algorithm with the Gaussian Kernel


Abstract

Support vector data description (SVDD) is a machine learning technique that is used for single-class classification and outlier detection. The idea of SVDD is to find a set of support vectors that defines a boundary around the data. When dealing with online or large data, existing batch SVDD methods have to be rerun in each iteration. We propose an incremental learning algorithm for SVDD that uses the Gaussian kernel. This algorithm builds on the observation that all support vectors on the boundary have the same distance to the center of the sphere in a higher-dimensional feature space as mapped by the Gaussian kernel function. Each iteration only involves the existing support vectors and the new data point. The algorithm is based solely on matrix manipulations; the support vectors and their corresponding Lagrange multipliers $\alpha_i$ are automatically selected and determined in each iteration. The complexity of our algorithm in each iteration is only $O(k^2)$, where $k$ is the number of support vectors. Our experimental results on several real data sets show that our incremental algorithm achieves similar F-1 scores with much less running time.
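
The observation the abstract refers to follows from the standard SVDD geometry with the Gaussian kernel; the notation used here ($\phi$, $a$, $R$, $\sigma$) is the usual SVDD convention rather than taken from the paper. Because $K(x, x) = \exp(-\|x - x\|^2 / (2\sigma^2)) = 1$ for every $x$, all mapped points $\phi(x)$ lie on the unit sphere in feature space, and the squared distance from $\phi(x)$ to the center $a = \sum_i \alpha_i \phi(x_i)$ is

$d^2(x) = K(x, x) - 2 \sum_i \alpha_i K(x_i, x) + \sum_{i,j} \alpha_i \alpha_j K(x_i, x_j) = 1 - 2 \sum_i \alpha_i K(x_i, x) + C$,

where $C$ does not depend on $x$. Every boundary support vector $x_s$ satisfies $d^2(x_s) = R^2$, so $\sum_i \alpha_i K(x_i, x_s)$ takes one common value across all boundary support vectors; together with the constraint $\sum_i \alpha_i = 1$, this yields a linear system in the $\alpha_i$ over the $k \times k$ kernel matrix of the support vectors, which is consistent with the stated $O(k^2)$ cost per iteration.

A minimal sketch of the corresponding distance computation with NumPy is given below; the function and variable names (gaussian_kernel, svdd_sq_distance, sv, alpha, sigma) are illustrative assumptions, and the sketch covers only the decision rule, not the authors' incremental update.

import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel between the rows of A and the rows of B.
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * sigma**2))

def svdd_sq_distance(x, sv, alpha, sigma=1.0):
    # Squared feature-space distance from phi(x) to the center a = sum_i alpha_i phi(sv_i).
    # With the Gaussian kernel, K(x, x) = 1, so only the cross term depends on x.
    k_x = gaussian_kernel(x[None, :], sv, sigma)[0]   # kernel values against the k support vectors
    k_ss = gaussian_kernel(sv, sv, sigma)             # k-by-k Gram matrix of the support vectors
    return 1.0 - 2.0 * alpha @ k_x + alpha @ k_ss @ alpha

# Example: a point is flagged as an outlier when this distance exceeds the sphere radius R.
rng = np.random.default_rng(0)
sv = rng.normal(size=(5, 3))         # 5 hypothetical support vectors in 3 dimensions
alpha = np.full(5, 0.2)              # Lagrange multipliers summing to 1
print(svdd_sq_distance(rng.normal(size=3), sv, alpha, sigma=2.0))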
