
OPTIMAL COMPUTATIONAL AND STATISTICAL RATES OF CONVERGENCE FOR SPARSE NONCONVEX LEARNING PROBLEMS



Abstract

We provide a theoretical analysis of the statistical and computational properties of penalized M-estimators that can be formulated as the solution to a possibly nonconvex optimization problem. Many important estimators fall in this category, including least squares regression with nonconvex regularization, generalized linear models with nonconvex regularization, and sparse elliptical random design regression. For these problems, it is intractable to calculate the global solution due to the nonconvex formulation. In this paper, we propose an approximate regularization path-following method for solving a variety of learning problems with nonconvex objective functions. Under a unified analytic framework, we simultaneously provide explicit statistical and computational rates of convergence for any local solution attained by the algorithm. Computationally, our algorithm attains a global geometric rate of convergence for calculating the full regularization path, which is optimal among all first-order algorithms. Unlike most existing methods, which only attain geometric rates of convergence for a single regularization parameter, our algorithm calculates the full regularization path with the same iteration complexity. In particular, we provide a refined iteration complexity bound to sharply characterize the performance of each stage along the regularization path. Statistically, we provide sharp sample complexity analysis for all the approximate local solutions along the regularization path. In particular, our analysis improves upon existing results by providing a more refined sample complexity bound as well as an exact support recovery result for the final estimator. These results show that the final estimator attains an oracle statistical property due to the use of the nonconvex penalty.
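The path-following idea described in the abstract can be sketched roughly as follows: solve a geometric sequence of penalized least-squares problems with a decreasing regularization parameter, warm-starting each stage from the previous solution and running a first-order (proximal gradient) solver at each stage. This is an illustrative sketch only, not the paper's exact algorithm: the MCP penalty, the λ schedule, the stage/iteration counts, and all function names here are assumptions chosen for concreteness.

```python
import numpy as np

def mcp_prox(z, lam, gamma, step):
    """Proximal map of the MCP (minimax concave) penalty.

    Illustrative nonconvex penalty; requires gamma > step for the
    closed-form solution below to be valid.
    """
    shrunk = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0) / (1.0 - step / gamma)
    return np.where(np.abs(z) <= gamma * lam, shrunk, z)

def path_following(X, y, lam_target, n_stages=10, gamma=3.0, iters_per_stage=200):
    """Approximate regularization path-following (illustrative sketch).

    Minimizes (1/2n)||y - X theta||^2 + MCP_lam(theta) along a geometric
    sequence of lambdas, warm-starting each stage at the previous solution.
    """
    n, d = X.shape
    # Step size 1/L, where L = ||X||_2^2 / n is the Lipschitz constant
    # of the gradient of the least-squares loss.
    step = n / np.linalg.norm(X, 2) ** 2
    # At lam0 = ||X^T y||_inf / n, theta = 0 is a stationary point,
    # so the path starts from the all-zero solution.
    lam0 = np.max(np.abs(X.T @ y)) / n
    theta = np.zeros(d)
    for lam in np.geomspace(lam0, lam_target, n_stages):
        for _ in range(iters_per_stage):
            grad = X.T @ (X @ theta - y) / n
            theta = mcp_prox(theta - step * grad, lam, gamma, step)
    return theta
```

Because the MCP penalty is flat for large coefficients, the final estimate is nearly unbiased on the support, which is the mechanism behind the oracle property the abstract refers to; the warm starts keep each stage close to its solution, which is what makes a uniform geometric rate along the whole path plausible.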
