Conference: Italian Workshop on Neural Networks

Combining Bagging, Boosting and Dagging for Classification Problems



Abstract

Bagging, boosting and dagging are well-known re-sampling ensemble methods that generate and combine a diverse set of classifiers, all trained with the same learning algorithm as base classifiers. Boosting algorithms are considered stronger than bagging and dagging on noise-free data; however, there are strong empirical indications that bagging and dagging are much more robust than boosting in noisy settings. For this reason, in this work we built an ensemble that combines, via a voting methodology, bagging, boosting and dagging ensembles with 8 sub-classifiers each. On standard benchmark datasets we compared it with simple bagging, boosting and dagging ensembles of 25 sub-classifiers, as well as with other well-known combining methods, and the proposed technique achieved better accuracy in most cases.

