A Gradient-Based Boosting Algorithm for Regression Problems


Abstract

In adaptive boosting, several weak learners trained sequentially are combined to boost the overall algorithm performance. Recently, adaptive boosting methods for classification problems have been derived as gradient descent algorithms. This formulation justifies key elements and parameters in the methods, all chosen to optimize a single common objective function. We propose an analogous formulation for adaptive boosting of regression problems, utilizing a novel objective function that leads to a simple boosting algorithm. We prove that this method reduces training error, and compare its performance to other regression methods.
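To illustrate the gradient-descent view of boosting for regression described in the abstract, the sketch below fits each weak learner to the negative gradient of a squared-error loss in function space. This is a minimal, generic sketch, not the paper's specific objective function or algorithm; the DecisionTreeRegressor weak learner, the learning rate, and the number of rounds are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boost_regression(X, y, n_rounds=50, lr=0.1, max_depth=3):
    """Build an additive model F(x) = lr * sum_t h_t(x) by sequentially
    fitting each weak learner h_t to the negative gradient of the
    squared-error loss (i.e. the current residuals)."""
    F = np.zeros(len(y), dtype=float)      # current ensemble prediction
    learners = []
    for _ in range(n_rounds):
        residuals = y - F                  # negative gradient of 0.5 * (y - F)^2
        h = DecisionTreeRegressor(max_depth=max_depth)
        h.fit(X, residuals)                # weak learner trained on residuals
        F += lr * h.predict(X)             # gradient-descent step in function space
        learners.append(h)
    return learners

def boost_predict(learners, X, lr=0.1):
    return lr * sum(h.predict(X) for h in learners)
```

For example, on synthetic data such as X = np.random.rand(200, 1) and y = np.sin(3 * X[:, 0]), successive rounds shrink the training residuals, mirroring the training-error reduction the abstract proves for the authors' method.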
