Annual Conference on Neural Information Processing Systems

Stochastic optimization and sparse statistical recovery: Optimal algorithms for high dimensions

Abstract

We develop and analyze stochastic optimization algorithms for problems in which the expected loss is strongly convex, and the optimum is (approximately) sparse. Previous approaches are able to exploit only one of these two structures, yielding either an O(d/T) convergence rate for strongly convex objectives in d dimensions or an O(√(s(log d)/T)) convergence rate when the optimum is s-sparse. Our algorithm is based on successively solving a series of ℓ_1-regularized optimization problems using Nesterov's dual averaging algorithm. We establish that the error of our solution after T iterations is at most O(s(log d)/T), with natural extensions to approximate sparsity. Our results apply to locally Lipschitz losses, including the logistic, exponential, hinge, and least-squares losses. By recourse to statistical minimax results, we show that our convergence rates are optimal up to constant factors. The effectiveness of our approach is also confirmed in numerical simulations, where we compare to several baselines on a least-squares regression problem.
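To make the multi-stage structure concrete, here is a minimal Python sketch of one plausible instantiation: each epoch runs ℓ_1-regularized dual averaging (in the style of Nesterov's dual averaging, with the closed-form ℓ_1 prox step) started from the previous epoch's average iterate, with the regularization weight halved between epochs. All names, the step-size rule β_t = γ√t, the epoch schedule, and the constants are illustrative assumptions, not the paper's prescriptions; in particular, the paper additionally constrains each epoch's iterates to a shrinking ℓ_1-ball, which this sketch omits.

```python
# Hedged sketch: epochs of l1-regularized dual averaging with restarts and a
# halved regularization weight per epoch. Constants here are untuned guesses.
import numpy as np

def soft_threshold(z, tau):
    """Elementwise soft-thresholding: sign(z) * max(|z| - tau, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def rda_epoch(sample_grad, center, lam, n_iters, gamma=100.0):
    """One epoch of l1-regularized dual averaging, prox-centered at `center`.

    Each step solves, in closed form,
        w_{t+1} = argmin_w <gbar_t, w> + lam*||w||_1
                          + (gamma / (2*sqrt(t))) * ||w - center||^2,
    where gbar_t is the running average of the stochastic subgradients.
    Returns the average iterate of the epoch.
    """
    gbar = np.zeros_like(center)
    w = center.copy()
    w_sum = np.zeros_like(center)
    for t in range(1, n_iters + 1):
        g = sample_grad(w)
        gbar += (g - gbar) / t                 # running subgradient average
        scale = np.sqrt(t) / gamma
        w = soft_threshold(center - scale * gbar, scale * lam)
        w_sum += w
    return w_sum / n_iters

# Usage on a synthetic sparse least-squares problem (the setting of the
# abstract's simulations); dimensions and noise level are made up.
rng = np.random.default_rng(0)
d, s, sigma = 500, 10, 0.5
w_star = np.zeros(d)
w_star[:s] = 1.0                               # s-sparse optimum

def sample_grad(w):
    x = rng.standard_normal(d)                 # fresh sample each query
    y = x @ w_star + sigma * rng.standard_normal()
    return (x @ w - y) * x                     # gradient of 0.5*(x @ w - y)**2

w, lam = np.zeros(d), 0.5
for epoch in range(8):
    w = rda_epoch(sample_grad, w, lam, n_iters=2000)
    lam /= 2.0                                 # halve regularization per epoch
print("l2 estimation error:", np.linalg.norm(w - w_star))
```

The restart-and-halve schedule is what lets the strongly convex and sparse structures compound: each epoch behaves like a sparse recovery step around an increasingly accurate center, which is the mechanism behind the O(s(log d)/T) rate claimed above.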
