Annals of the Institute of Statistical Mathematics

On constrained and regularized high-dimensional regression

Abstract

High-dimensional feature selection has become increasingly crucial for seeking parsimonious models in estimation. For selection consistency, we derive a necessary and sufficient condition formulated on the notion of degree of separation. The minimal degree of separation is necessary for any method to be selection consistent. At a level slightly higher than the minimal degree of separation, selection consistency is achieved by a constrained L_0-method and its computational surrogate, the constrained truncated L_1-method. This permits up to exponentially many candidate features in the sample size. In other words, these methods are optimal in feature selection against any selection method. In contrast, their regularization counterparts, the L_0-regularization and truncated L_1-regularization methods, achieve this under slightly stronger assumptions. More importantly, sharper parameter estimation and prediction are realized through such selection, leading to minimax parameter estimation. This is otherwise impossible in the absence of a good selection method for high-dimensional analysis.
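
The abstract refers to the constrained truncated L_1-method as a computational surrogate for the constrained L_0-method. As a rough illustration of the related truncated L_1-regularization idea, the sketch below fits a truncated-L_1 (TLP) penalized least-squares regression by iterating a weighted lasso, a common difference-of-convex strategy for this class of penalty. The function names, the coordinate-descent solver, the toy data, and the choices of lam and tau are illustrative assumptions, not the paper's own implementation.

```python
import numpy as np

def weighted_lasso_cd(X, y, lam, w, beta=None, n_iter=200, tol=1e-8):
    # Coordinate descent for (1/(2n))||y - X b||^2 + lam * sum_j w_j * |b_j|.
    n, p = X.shape
    beta = np.zeros(p) if beta is None else beta.copy()
    col_sq = (X ** 2).mean(axis=0)      # (1/n) * sum_i x_ij^2 for each column
    r = y - X @ beta                    # current residual
    for _ in range(n_iter):
        max_change = 0.0
        for j in range(p):
            if col_sq[j] == 0.0:
                continue
            old = beta[j]
            # correlation of column j with the partial residual (coordinate j excluded)
            rho = (X[:, j] @ r) / n + col_sq[j] * old
            beta[j] = np.sign(rho) * max(abs(rho) - lam * w[j], 0.0) / col_sq[j]
            if beta[j] != old:
                r -= X[:, j] * (beta[j] - old)
                max_change = max(max_change, abs(beta[j] - old))
        if max_change < tol:
            break
    return beta

def tlp_regression(X, y, lam, tau, n_dc=20):
    # Truncated-L_1 (TLP) penalty lam * sum_j min(|b_j|/tau, 1), handled by
    # difference-of-convex iterations: each step is a weighted lasso in which
    # coordinates with |b_j| >= tau at the previous iterate are left unpenalized.
    p = X.shape[1]
    beta = np.zeros(p)
    for _ in range(n_dc):
        w = (np.abs(beta) < tau).astype(float)   # 0/1 weights from previous iterate
        beta_new = weighted_lasso_cd(X, y, lam / tau, w, beta=beta)
        if np.allclose(beta_new, beta, atol=1e-8):
            break
        beta = beta_new
    return beta

# Toy example (hypothetical data): n = 50 observations, p = 200 candidate features,
# only the first 3 coefficients are nonzero.
rng = np.random.default_rng(0)
n, p = 50, 200
X = rng.standard_normal((n, p))
true_beta = np.zeros(p)
true_beta[:3] = [2.0, -1.5, 1.0]
y = X @ true_beta + 0.5 * rng.standard_normal(n)

beta_hat = tlp_regression(X, y, lam=0.02, tau=0.1)   # lam and tau would normally be tuned
print("selected features:", np.nonzero(np.abs(beta_hat) > 1e-6)[0])
```

In this surrogate, coefficients that exceed the threshold tau after an iteration are no longer shrunk, which is what allows the truncated penalty to mimic L_0 selection while remaining computable.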
