Journal of Machine Learning Research

Boosting as a Regularized Path to a Maximum Margin Classifier



Abstract

In this paper we study boosting methods from a new perspective. We build on recent work by Efron et al. to show that boosting approximately (and in some cases exactly) minimizes its loss criterion with an l1 constraint on the coefficient vector. This helps understand the success of boosting with early stopping as regularized fitting of the loss criterion. For the two most commonly used criteria (exponential and binomial log-likelihood), we further show that as the constraint is relaxed---or equivalently as the boosting iterations proceed---the solution converges (in the separable case) to an "l1-optimal" separating hyper-plane. We prove that this l1-optimal separating hyper-plane has the property of maximizing the minimal l1-margin of the training data, as defined in the boosting literature. An interesting fundamental similarity between boosting and kernel support vector machines emerges, as both can be described as methods for regularized optimization in high-dimensional predictor space, using a computational trick to make the calculation practical, and converging to margin-maximizing solutions. While this statement describes SVMs exactly, it applies to boosting only approximately.
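
For concreteness, the problem the abstract refers to can be written out (a sketch in standard notation, ours rather than quoted from the paper): given training data $(x_i, y_i)$ with $y_i \in \{-1, +1\}$ and a dictionary of weak learners $h_j$, boosting with early stopping approximately solves the l1-constrained fit

    \min_{\beta} \sum_{i=1}^{n} C\Big(y_i, \sum_{j} \beta_j h_j(x_i)\Big)
    \quad \text{subject to} \quad \|\beta\|_1 \le c ,

where $C$ is the exponential or binomial log-likelihood loss. As $c \to \infty$, or equivalently as the boosting iterations proceed, the normalized solution converges in the separable case to the hyper-plane attaining

    \max_{\|\beta\|_1 = 1} \; \min_{i} \; y_i \sum_{j} \beta_j h_j(x_i) ,

i.e. maximizing the minimal l1-margin of the training data.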
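The link to Efron et al. can also be seen computationally. Below is a minimal runnable sketch (our illustration, not code from the paper): epsilon-stagewise fitting under squared loss, the simplest boosting-like procedure, traces approximately the same coefficient path as l1-regularized (lasso) fitting, so early stopping plays the role of the constraint on the l1 norm of the coefficients.

    # Minimal sketch: epsilon-stagewise fitting vs. the lasso path.
    # Assumes numpy and scikit-learn are available; all names are ours.
    import numpy as np
    from sklearn.linear_model import lasso_path

    rng = np.random.default_rng(0)
    n, p = 200, 5
    X = rng.standard_normal((n, p))
    beta_true = np.array([3.0, -2.0, 0.0, 0.0, 1.0])
    y = X @ beta_true + 0.5 * rng.standard_normal(n)

    # Epsilon-stagewise ("epsilon-boosting") under squared loss: each step
    # nudges the coefficient most correlated with the current residual.
    # The number of steps taken controls the l1 norm of beta, which is
    # how early stopping acts as l1 regularization.
    eps, steps = 0.01, 2000
    beta = np.zeros(p)
    for _ in range(steps):
        resid = y - X @ beta
        corr = X.T @ resid
        j = int(np.argmax(np.abs(corr)))
        beta[j] += eps * np.sign(corr[j])

    # Lasso solutions along a grid of l1 penalties, for comparison.
    _, lasso_coefs, _ = lasso_path(X, y)

    print("stagewise beta:", np.round(beta, 2))
    print("lasso beta:   ", np.round(lasso_coefs[:, -1], 2))

The paper's result is the analogue for the exponential and binomial log-likelihood losses: in the separable case, the path converges to the l1-margin-maximizing hyper-plane given above.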
