Conference on Neural Information Processing Systems (NeurIPS)

Dual Averaging Method for Regularized Stochastic Learning and Online Optimization



Abstract

We consider regularized stochastic learning and online optimization problems, where the objective function is the sum of two convex terms: one is the loss function of the learning task, and the other is a simple regularization term such as the ℓ_1-norm for promoting sparsity. We develop a new online algorithm, the regularized dual averaging (RDA) method, that can explicitly exploit the regularization structure in an online setting. In particular, at each iteration, the learning variables are adjusted by solving a simple optimization problem that involves the running average of all past subgradients of the loss functions and the whole regularization term, not just its subgradient. Computational experiments show that the RDA method can be very effective for sparse online learning with ℓ_1-regularization.
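For ℓ_1-regularization, the per-iteration RDA subproblem described in the abstract admits a closed-form, soft-threshold-like solution: coordinates whose averaged subgradient is small in magnitude are set exactly to zero, which is how the method produces sparse iterates. The sketch below is a minimal illustration under assumed conventions (a step-size schedule β_t = γ√t and the parameter names are choices for this example, not notation fixed by the abstract):

```python
import numpy as np

def rda_l1_update(g_bar, t, lam, gamma=1.0):
    """One closed-form RDA update for l1-regularization (illustrative sketch).

    g_bar : running average of all past subgradients, (1/t) * sum_{tau<=t} g_tau
    t     : iteration count (t >= 1)
    lam   : l1-regularization weight
    gamma : auxiliary step-size parameter; here beta_t = gamma * sqrt(t)
    """
    # Soft-threshold the averaged subgradient: coordinates with
    # |g_bar_i| <= lam are truncated to exactly zero (sparsity),
    # the rest are scaled by -(sqrt(t) / gamma).
    shrink = np.maximum(np.abs(g_bar) - lam, 0.0)
    return -(np.sqrt(t) / gamma) * np.sign(g_bar) * shrink

# Usage: with an averaged subgradient of [0.05, 0.5, -0.3] and lam = 0.1,
# the first coordinate is truncated to zero while the others survive.
w = rda_l1_update(np.array([0.05, 0.5, -0.3]), t=4, lam=0.1, gamma=1.0)
```

In an online loop, one would accumulate the subgradient of each incoming loss, divide by the iteration count to form `g_bar`, and call this update; the whole regularizer enters through the thresholding, not merely through its subgradient.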
