International Conference on Machine Learning

Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization



Abstract

We develop a family of accelerated stochastic algorithms that optimize sums of convex functions. Our algorithms improve upon the fastest running time for empirical risk minimization (ERM), and in particular linear least-squares regression, across a wide range of problem settings. To achieve this, we establish a framework, based on the classical proximal point algorithm, useful for accelerating recent fast stochastic algorithms in a black-box fashion. Empirically, we demonstrate that the resulting algorithms exhibit notions of stability that are advantageous in practice. Both in theory and in practice, the provided algorithms reap the computational benefits of adding a large strongly convex regularization term, without incurring a corresponding bias to the original ERM problem.
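The framework described above repeatedly adds a strongly convex regularization term centered at the current iterate, so each subproblem is easier to solve, yet the outer loop converges to the unregularized optimum. A minimal sketch of this approximate proximal point idea, using a least-squares ERM objective and plain gradient descent as a stand-in for the fast stochastic inner solver (the function name, step sizes, and iteration counts are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def approx_prox_point(grad_f, x0, lam, inner_steps=200, eta=0.01, outer_steps=20):
    """Minimize f by approximately solving, at each outer step,
        argmin_x  f(x) + (lam/2) * ||x - x_k||^2,
    i.e. the proximal point update with an inexact inner solver."""
    x = x0.copy()
    for _ in range(outer_steps):
        center = x.copy()  # regularize toward the current iterate
        for _ in range(inner_steps):
            # gradient of the strongly convex subproblem
            g = grad_f(x) + lam * (x - center)
            x = x - eta * g
    return x

# Example: linear least-squares regression, f(x) = (1/2n) ||A x - b||^2
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))
x_true = rng.standard_normal(5)
b = A @ x_true
grad_f = lambda x: A.T @ (A @ x - b) / len(b)

x_hat = approx_prox_point(grad_f, np.zeros(5), lam=1.0)
```

Because the regularization is re-centered at each outer iteration, the added `lam` term introduces no bias in the limit, which is the point of the "un-regularizing" construction: the inner solver enjoys strong convexity `lam` while the outer sequence still converges to the original ERM solution.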
