JMLR: Workshop and Conference Proceedings

Model Consistency for Learning with Mirror-Stratifiable Regularizers



Abstract

Low-complexity non-smooth convex regularizers are routinely used to impose structure (such as sparsity or low rank) on the coefficients of linear predictors in supervised learning. Model consistency then consists in selecting the correct structure (for instance, the support or the rank) by regularized empirical risk minimization. Model consistency is known to hold under appropriate non-degeneracy conditions. However, such conditions typically fail for highly correlated designs, and regularization methods are observed to select larger models. In this work, we provide the theoretical underpinning of this behavior using the notion of mirror-stratifiable regularizers. This class encompasses the best-known regularizers in the literature, including the L1 and trace norms. It brings into play a pair of primal-dual models, which in turn allows one to locate the structure of the solution using a specific dual certificate. We also show that this analysis applies both to optimal solutions of the learning problem and to the iterates computed by a certain class of stochastic proximal-gradient algorithms.
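As a concrete illustration of the setting in the abstract (not code from the paper), the sketch below runs a plain proximal-gradient method (ISTA) on a Lasso problem, whose L1 regularizer is mirror-stratifiable, and then reads off the structure of the solution, i.e. its support. All names, the regularization strength, and the problem sizes are illustrative assumptions; model consistency here would mean the estimated support matching the true one.

```python
import numpy as np

# Hypothetical example: Lasso via proximal gradient descent (ISTA).
# The L1 norm is a mirror-stratifiable regularizer; the "structure"
# of a solution is its support (set of nonzero coefficients).
rng = np.random.default_rng(0)
n, p, s = 50, 20, 3                      # samples, features, true sparsity (assumed)
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[:s] = [2.0, -1.5, 1.0]            # ground-truth sparse coefficients
y = X @ w_true + 0.05 * rng.standard_normal(n)

lam = 0.1                                # regularization strength (assumed)
L = np.linalg.norm(X, 2) ** 2 / n        # Lipschitz constant of the smooth part
w = np.zeros(p)
for _ in range(5000):                    # ISTA: gradient step + soft-thresholding
    grad = X.T @ (X @ w - y) / n
    z = w - grad / L
    w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

support = np.flatnonzero(np.abs(w) > 1e-8)
print("estimated support:", support)
```

Under a well-conditioned design like this i.i.d. Gaussian one, the estimated support typically coincides with the true support {0, 1, 2}; the phenomenon analyzed in the paper is that, for highly correlated designs where non-degeneracy fails, the selected support tends to be strictly larger.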


