Neural Computing & Applications

Functional iterative approaches for solving support vector classification problems based on generalized Huber loss

Abstract

Classical support vector machine (SVM) and its twin variant, twin support vector machine (TWSVM), utilize the Hinge loss, which shows linear behaviour, whereas the least squares version of SVM (LSSVM) and the twin least squares support vector machine (LSTSVM) use the L2-norm of the error, which shows quadratic growth. The robust Huber loss function can be viewed as a generalization of the Hinge loss and the L2-norm loss: it behaves like the quadratic L2-norm loss for error points within a specified distance and like the linear Hinge loss beyond it. Three functional iterative approaches based on the generalized Huber loss function are proposed in this paper to solve support vector classification problems: one is based on SVM, i.e. the generalized Huber support vector machine, and the other two are in the spirit of TWSVM, namely the generalized Huber twin support vector machine and its regularized version. The proposed approaches find the solutions iteratively and eliminate the need to solve a quadratic programming problem (QPP), as is required for SVM and TWSVM. The main advantages of the proposed approaches are: first, they utilize the robust Huber loss function, which gives better generalization and lower sensitivity to noise and outliers than the quadratic loss; second, they use a functional iterative scheme to find the solution, which eliminates the need to solve a QPP and makes the proposed approaches faster. The efficacy of the proposed approaches is established by performing numerical experiments on several real-world datasets and comparing the results with related methods, viz. SVM, TWSVM, LSSVM and LSTSVM. The classification results are convincing.
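
As a rough illustration of the loss described in the abstract, the sketch below implements a Huber-style margin loss in Python that grows quadratically for margin violations below a threshold and linearly beyond it. This is a minimal sketch under assumed notation; the function name generalized_huber_loss and the threshold parameter delta are illustrative choices, not the paper's exact formulation.

import numpy as np

def generalized_huber_loss(y, f, delta=1.0):
    # y: labels in {-1, +1}; f: predicted decision values.
    # Hinge-style margin violation: only points violating the margin are penalized.
    e = np.maximum(0.0, 1.0 - y * f)
    quadratic = 0.5 * e ** 2                 # L2-like region, e <= delta
    linear = delta * e - 0.5 * delta ** 2    # Hinge-like (linear) region, e > delta
    return np.where(e <= delta, quadratic, linear)

# Small violations are penalized quadratically, large ones linearly:
# generalized_huber_loss(np.array([1, 1, -1]), np.array([0.8, -2.0, 0.5]))
# -> array([0.02, 2.5, 1.0])

The constant offset in the linear branch keeps the loss continuous and smooth at e = delta; this smoothness is what allows the solution to be found by an iterative scheme rather than by solving a QPP, as the abstract describes.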