
Robustified L_2 boosting


Abstract

Five robustifications of L_2 boosting for linear regression with various robustness properties are considered. The first two use the Huber loss as the implementing loss function for boosting, and the next two use robust simple linear regression for the fitting in L_2 boosting (i.e. robust base learners). Both concepts can be applied with or without down-weighting of leverage points. Our last method uses robust correlation estimates and appears to be the most robust. Crucial advantages of all methods are that they do not compute covariance matrices of all covariates and that they do not have to identify multivariate leverage points. When there are no outliers, the robust methods are only slightly worse than L_2 boosting. In the contaminated case, though, the robust methods outperform L_2 boosting by a large margin. Some of the robustifications are also computationally highly efficient and therefore well suited for truly high-dimensional problems.
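
To make the componentwise mechanism behind these ideas concrete, below is a minimal sketch, not taken from the paper, of L_2 boosting with simple-linear-regression base learners together with a Huber-loss variant obtained by clipping the working residuals (functional gradient descent). The function name `boost`, the shrinkage `nu = 0.1`, and the Huber constant `delta` are illustrative assumptions; in practice `delta` would be tied to a robust scale estimate of the residuals, and the paper's other robustifications (robust base learners, robust correlation estimates, leverage-point down-weighting) are not shown.

```python
# Minimal sketch (an assumption, not the authors' implementation) of
# componentwise L2 boosting and a Huber-loss robustification via
# functional gradient descent.
import numpy as np

def boost(X, y, loss="l2", n_steps=100, nu=0.1, delta=1.345):
    """Componentwise boosting with simple linear regression base learners.
    loss='l2' is plain L2 boosting; loss='huber' replaces the working
    residuals by the Huber psi-function (clipped residuals)."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)            # centered covariates
    ss = (Xc ** 2).sum(axis=0)         # column sums of squares
    F = np.full(n, y.mean())           # initial fit: the response mean
    coefs = np.zeros(p)
    for _ in range(n_steps):
        r = y - F                      # current residuals
        if loss == "huber":
            r = np.clip(r, -delta, delta)   # negative gradient of Huber loss
        betas = Xc.T @ r / ss          # simple LS fit of r on each covariate
        sse = ((r[:, None] - Xc * betas) ** 2).sum(axis=0)
        j = int(np.argmin(sse))        # covariate whose base learner fits best
        F += nu * betas[j] * Xc[:, j]  # shrunken update of the fit
        coefs[j] += nu * betas[j]
    return y.mean(), coefs

# Toy usage: sparse linear model with a few gross outliers in y.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(size=200)
y[:5] += 50.0                          # contaminate five responses
for loss in ("l2", "huber"):
    _, b = boost(X, y, loss=loss)
    print(loss, np.round(b[:3], 2))
```

Because each step only requires p simple regressions against the current working residuals, no covariance matrix of all covariates is ever formed, which is the computational advantage the abstract points to for high-dimensional problems.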