Scandinavian Journal of Statistics

Improving Lasso for model selection and prediction

Abstract

It is known that the Thresholded Lasso (TL), SCAD, or MCP correct the intrinsic estimation bias of the Lasso. In this paper we propose an alternative method of improving the Lasso for predictive models with general convex loss functions, which encompass normal linear models, logistic regression, quantile regression, and support vector machines. For a given penalty, we order the absolute values of the nonzero Lasso coefficients and then select the final model from a small nested family using the Generalized Information Criterion. We derive exponential upper bounds on the selection error of the method. These results confirm that, at least for normal linear models, our algorithm can serve as a benchmark for the theory of model selection, as it is constructive, computationally efficient, and leads to consistent model selection under weak assumptions. Constructivity of the algorithm means that, in contrast to the TL, SCAD, or MCP, consistent selection does not rely on unknown parameters such as the cone invertibility factor. Instead, our algorithm needs only the sample size, the number of predictors, and an upper bound on the noise parameter. We show in numerical experiments on synthetic and real-world datasets that an implementation of our algorithm is more accurate than implementations of the studied concave regularizations. Our procedure is included in the R package DMRnet, available in the CRAN repository.
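The two-stage idea described in the abstract (screen with the Lasso, order the nonzero coefficients by absolute value, then pick one model from the resulting nested family by an information criterion) can be sketched as follows for the normal linear model case. This is a hedged illustration, not the authors' DMRnet implementation: the scikit-learn `Lasso` estimator, the OLS refitting step, and the particular GIC penalty `log(p) * log(n)` used below are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def gic_lasso_selection(X, y, lasso_alpha=0.1, penalty_const=None):
    """Two-stage selection sketch (normal linear model):
    1) fit the Lasso for a given penalty,
    2) order nonzero coefficients by decreasing absolute value,
    3) scan the nested family of supports and pick the one
       minimizing a Generalized Information Criterion (GIC).
    The GIC penalty constant below is illustrative, not the paper's."""
    n, p = X.shape
    if penalty_const is None:
        penalty_const = np.log(p) * np.log(n)  # assumed GIC penalty per parameter

    lasso = Lasso(alpha=lasso_alpha).fit(X, y)
    support = np.flatnonzero(lasso.coef_)
    # Nested family: prefixes of the support ordered by |coefficient|.
    order = support[np.argsort(-np.abs(lasso.coef_[support]))]

    best_subset, best_gic = [], np.inf
    for k in range(len(order) + 1):
        subset = order[:k]
        if k == 0:
            rss = np.sum((y - y.mean()) ** 2)  # intercept-only model
        else:
            ols = LinearRegression().fit(X[:, subset], y)
            rss = np.sum((y - ols.predict(X[:, subset])) ** 2)
        # Gaussian log-likelihood term plus dimension penalty.
        gic = n * np.log(rss / n) + penalty_const * k
        if gic < best_gic:
            best_subset, best_gic = list(subset), gic
    return best_subset
```

On well-separated synthetic data with two strong signal variables, the GIC scan over the nested family typically recovers exactly those two predictors, whereas the raw Lasso support at a moderate penalty may still include small spurious coefficients.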
