《软件学报》 (Journal of Software)

A Maximum Soft Margin Algorithm Based on Selective Ensemble

Abstract

Current research on boosting ensemble learning algorithms focuses mainly on maximizing the hard or soft margin of a convex combination of weak learners. This convex combination uses almost all of the generated weak learners, yet many of them are correlated or redundant, which increases the time and space complexity of both training and classification. To address this problem, this paper proposes a selective boosting ensemble learning algorithm, called SelectedBoost, built on LPBoost for binary classification. After a new weak learner is generated in each iteration, the algorithm decides whether to keep it by measuring its relevance to, and diversity from, the weak learners already selected, combined with the accuracy of the current strong (ensemble) learner. In addition, existing boosting algorithms (e.g., AdaBoost, LPBoost, and ERLPBoost) essentially update the sample weights based on one or more of the generated weak learners; yet compared with any single weak learner, the strong learner better represents the current decision surface. SelectedBoost therefore introduces a stricter edge constraint on the strong learner into the constrained margin-maximization problem, so that the sample weights are updated with reference not only to the weak learners' edges but also to the strong learner already built, thereby speeding up convergence. Finally, experimental comparisons with other representative ensemble learning algorithms show that the proposed method offers clear advantages in convergence rate, classification accuracy, and generalization ability.
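The abstract gives no code; the following is a minimal sketch of the selection idea it describes, not the paper's actual procedure. The function names and the thresholds `max_relevance` and `min_diversity` are hypothetical: after each boosting iteration, the new weak learner is kept only if it is not too correlated with the learners already selected, is sufficiently diverse from the current strong learner, and corrects at least some of the strong learner's mistakes.

```python
import numpy as np

def should_keep(new_preds, kept_preds_list, ensemble_preds, y,
                max_relevance=0.9, min_diversity=0.1):
    """Hypothetical keep/discard test for a newly generated weak learner.

    new_preds:       +/-1 predictions of the candidate weak learner
    kept_preds_list: list of +/-1 prediction vectors of already-kept learners
    ensemble_preds:  +/-1 predictions of the current strong (ensemble) learner
    y:               +/-1 true labels
    """
    if not kept_preds_list:
        # Nothing to compare against yet: always keep the first learner.
        return True
    # Relevance: highest agreement rate with any already-kept weak learner.
    relevance = max(np.mean(new_preds == p) for p in kept_preds_list)
    # Diversity: fraction of samples on which the candidate disagrees
    # with the current strong learner.
    diversity = np.mean(new_preds != ensemble_preds)
    # The candidate is only useful if it fixes some ensemble mistakes.
    corrects_errors = np.any((new_preds == y) & (ensemble_preds != y))
    return bool(relevance < max_relevance
                and diversity >= min_diversity
                and corrects_errors)
```

A candidate that merely duplicates an existing learner fails the relevance check, while one that disagrees with the ensemble only where the ensemble is already correct fails the error-correction check; both would be discarded, which is the redundancy-pruning effect the abstract attributes to SelectedBoost.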


