JMLR: Workshop and Conference Proceedings

Non-Linear Gradient Boosting for Class-Imbalance Learning


Abstract

Gradient boosting relies on linearly combining diverse, weak hypotheses to build a strong classifier. In the class-imbalance setting, boosting algorithms often require many hypotheses, which tend to be more complex and may increase the risk of overfitting. In this paper, we propose to address this issue by adapting the gradient boosting framework to a non-linear setting. In order to learn the idiosyncrasies of the target concept and prevent the algorithm from being biased toward the majority class, we suggest jointly learning different combinations of the same set of very weak classifiers and expanding the expressiveness of the final model by leveraging their non-linear complementarity. We perform an extensive experimental study using decision trees and show that, while requiring far fewer weak learners of lower complexity (fewer splits per tree), our model outperforms standard linear gradient boosting.
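To make the baseline concrete, below is a minimal sketch of the *standard linear* gradient boosting the abstract compares against: an additive model over depth-1 decision stumps trained on the logistic loss, run on an illustrative 90/10 imbalanced toy dataset. This is not the paper's non-linear method; all names, the dataset, and the hyperparameters (50 rounds, learning rate 0.3) are assumptions for illustration.

```python
import numpy as np

def fit_stump(X, r):
    """Least-squares depth-1 regression stump fit to residuals r."""
    best_sse, best = np.inf, None
    for j in range(X.shape[1]):
        # Every unique value except the max is a candidate threshold,
        # so both sides of the split are guaranteed non-empty.
        for thr in np.unique(X[:, j])[:-1]:
            left = X[:, j] <= thr
            vl, vr = r[left].mean(), r[~left].mean()
            sse = ((r[left] - vl) ** 2).sum() + ((r[~left] - vr) ** 2).sum()
            if sse < best_sse:
                best_sse, best = sse, (j, thr, vl, vr)
    return best

def predict_stump(stump, X):
    j, thr, vl, vr = stump
    return np.where(X[:, j] <= thr, vl, vr)

def gradient_boost(X, y, n_rounds=50, lr=0.3):
    """Linear gradient boosting: F(x) = sum_t lr * stump_t(x), logistic loss."""
    F = np.zeros(len(y))
    stumps = []
    for _ in range(n_rounds):
        p = 1.0 / (1.0 + np.exp(-F))
        residual = y - p  # negative gradient of the logistic loss
        s = fit_stump(X, residual)
        F += lr * predict_stump(s, X)
        stumps.append(s)
    return stumps

# Illustrative imbalanced dataset: 90% majority (class 0), 10% minority (class 1).
rng = np.random.default_rng(0)
n0, n1 = 180, 20
X = np.vstack([rng.normal(0.0, 1.0, (n0, 2)), rng.normal(2.5, 1.0, (n1, 2))])
y = np.r_[np.zeros(n0), np.ones(n1)]

stumps = gradient_boost(X, y)
F = sum(0.3 * predict_stump(s, X) for s in stumps)
pred = (F > 0).astype(int)
acc = (pred == y).mean()
recall_minority = pred[y == 1].mean()
print(f"training accuracy: {acc:.2f}")
print(f"minority-class recall: {recall_minority:.2f}")
```

Note that each round adds one stump with a scalar weight, so the ensemble is a purely linear combination of weak learners; the paper's point is that on imbalanced data such a model needs many (and deeper) trees, which the proposed non-linear combination avoids.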
