Neurocomputing

A heuristic training for support vector regression


Abstract

A heuristic method for accelerating support vector machine (SVM) training, based on a measure of similarity among samples, is presented in this paper. Training an SVM requires optimizing a quadratic function subject to linear constraints. The original formulation of the SVM objective is efficient during the optimization phase, but the resulting discriminant function often contains redundant terms. The economy of an SVM's discriminant function depends on a sparse subset of the training data, namely the support vectors selected by the optimization procedure. The motivation for a sparsity-controlled SVM is therefore practical: it reduces the computational expense of SVM testing and improves the interpretability of the model. Beyond existing approaches, an intuitive way to achieve this is to control the sparsity of the support vectors by reducing the training data without sacrificing generalization performance. The most attractive feature of this idea is that it makes SVM training fast, especially for large training sets, because the size of the optimization problem can be decreased greatly. In this paper, a heuristic rule is used to reduce the training data for support vector regression (SVR). First, all the training data are divided into several groups; then, within each group, some training vectors are discarded based on the measure of similarity among samples. This reduction is carried out in the original data space before SVM training, so its extra computational expense is nearly negligible. Even when the preprocessing cost is included, the total time is still less than that of training the SVM on the complete training set. As a result, the number of vectors used for SVR training becomes small, and the training time can be decreased greatly without compromising the generalization capability of the SVM. Simulation results show the effectiveness of the presented method.

