Improving Boosting by Exploiting Former Assumptions

In: Mining Complex Data

Abstract

Reducing generalization error is one of the principal motivations of machine learning research. A great deal of work has therefore been devoted to classifier aggregation methods, which generally improve on the performance of a single classifier through voting techniques. Among these aggregation methods, Boosting is one of the most practical, thanks to its adaptive update of the distribution over examples, which exponentially increases the weights of misclassified examples. However, the method is criticized for overfitting and for its convergence speed, especially in the presence of noise. In this study, we propose a new approach based on modifications to the AdaBoost algorithm. We show that it is possible to improve the performance of Boosting by exploiting the hypotheses generated in former iterations to correct the weights of the examples. An experimental study demonstrates the value of this new approach, which we call the hybrid approach.
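The abstract describes AdaBoost's exponential reweighting and a correction that draws on former hypotheses, but gives no formula, so the Python sketch below is only a plausible reading, not the authors' algorithm. It implements standard discrete AdaBoost and adds a hypothetical blend parameter that mixes the classic update with a term based on the averaged margin of all hypotheses built so far; the function name, the blend parameter, and that mixing rule are assumptions made for illustration.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_hybrid(X, y, n_rounds=50, blend=0.5):
    # Discrete AdaBoost for labels in {-1, +1}, with a hypothetical
    # "hybrid" reweighting that also consults the former hypotheses.
    n = len(X)
    w = np.full(n, 1.0 / n)            # distribution D_t over the examples
    margins = np.zeros(n)              # y_i * sum_t alpha_t * h_t(x_i)
    learners, alphas = [], []
    for t in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(np.sum(w * (pred != y)), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        learners.append(stump)
        alphas.append(alpha)
        margins += alpha * y * pred
        # Classic AdaBoost update: w_i <- w_i * exp(-alpha * y_i * h_t(x_i)).
        # Assumed hybrid correction: blend in the averaged margin of all
        # former hypotheses, so the committee's view tempers the last learner's.
        last = -alpha * y * pred
        former = -margins / (t + 1)
        w *= np.exp(blend * last + (1 - blend) * former)
        w /= w.sum()                   # renormalize to a distribution
    return learners, np.array(alphas)

# Toy usage on synthetic data (labels mapped to {-1, +1}).
from sklearn.datasets import make_classification
X, y = make_classification(n_samples=200, random_state=0)
y = 2 * y - 1
learners, alphas = adaboost_hybrid(X, y)

With blend = 1.0 the update reduces to standard AdaBoost, which makes the contribution of the former-hypotheses term easy to isolate experimentally.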