
Fast Incremental SVDD Learning Algorithm with the Gaussian Kernel

AAAI Conference on Artificial Intelligence


Abstract

Support vector data description (SVDD) is a machine learning technique used for single-class classification and outlier detection. The idea of SVDD is to find a set of support vectors that defines a boundary around the data. When dealing with online or large data, existing batch SVDD methods have to be rerun in each iteration. We propose a fast incremental SVDD learning algorithm (FISVDD) that uses the Gaussian kernel. The algorithm builds on the observation that all support vectors on the boundary have the same distance to the center of the sphere in the higher-dimensional feature space induced by the Gaussian kernel. Each iteration involves only the existing support vectors and the new data point. Moreover, the algorithm is based solely on matrix manipulations; the support vectors and their corresponding Lagrange multipliers α_i are automatically selected and determined in each iteration. The complexity of the algorithm in each iteration is only O(k^2), where k is the number of support vectors. Experimental results on several real data sets indicate that FISVDD achieves significant gains in efficiency with almost no loss in outlier detection accuracy or objective function value.
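The abstract describes the update only at a high level, so here is a minimal NumPy sketch of the incremental idea for the hard-boundary case. The class name IncrementalSVDDSketch, the gaussian_kernel helper, and the direct call to np.linalg.solve are assumptions made for illustration, not the paper's implementation; the paper reaches its O(k^2) per-iteration cost through incremental matrix manipulations rather than a fresh solve at each step.

import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    """Gaussian kernel matrix between the rows of X and the rows of Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

class IncrementalSVDDSketch:
    """Toy incremental SVDD with the Gaussian kernel (hard boundary).

    Only the current support vectors and their multipliers are kept.
    A new point inside the sphere is discarded; otherwise it is added
    and the small kernel system over the support vectors is re-solved,
    dropping any vector whose multiplier becomes non-positive.
    """

    def __init__(self, sigma=1.0):
        self.sigma = sigma
        self.sv = None       # support vectors, shape (k, d)
        self.alpha = None    # Lagrange multipliers, shape (k,)

    def _solve(self, sv):
        # All boundary support vectors are equidistant from the center,
        # which gives K v = 1 and alpha = v / sum(v).
        K = gaussian_kernel(sv, sv, self.sigma)
        v = np.linalg.solve(K, np.ones(len(sv)))
        return v / v.sum()

    def partial_fit(self, z):
        z = np.asarray(z, dtype=float)
        if self.sv is None:                       # first point seen
            self.sv, self.alpha = z[None, :], np.ones(1)
            return self
        k_sz = gaussian_kernel(self.sv, z[None, :], self.sigma).ravel()
        K_ss = gaussian_kernel(self.sv, self.sv, self.sigma)
        c = self.alpha @ K_ss @ self.alpha        # alpha' K alpha
        if self.alpha @ k_sz >= c:                # inside or on the sphere
            return self
        sv = np.vstack([self.sv, z])              # expand with the new point
        alpha = self._solve(sv)
        while np.any(alpha <= 0):                 # shrink the SV set
            sv = sv[alpha > 0]
            alpha = self._solve(sv)
        self.sv, self.alpha = sv, alpha
        return self

The inside-or-outside test in the sketch relies on the Gaussian identity K(z, z) = 1: writing c = α'Kα, the squared distance of a new point z to the center is 1 - 2 α'k + c and the squared radius is 1 - c, so z lies inside or on the boundary exactly when α'k ≥ c, and such points are discarded without touching the support-vector set.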
