IEEE Transactions on Signal Processing

Projected Stochastic Primal-Dual Method for Constrained Online Learning With Kernels



Abstract

We consider the problem of stochastic optimization with nonlinear constraints, where the decision variable is not vector-valued but is instead a function belonging to a reproducing kernel Hilbert space (RKHS). Currently, solutions exist only for special cases of this problem. To solve this constrained problem with kernels, we first generalize the Representer Theorem to a class of saddle-point problems defined over an RKHS. Then, we develop a primal-dual method that executes alternating projected primal/dual stochastic gradient descent/ascent on the dual-augmented Lagrangian of the problem. The primal projection sets are low-dimensional subspaces of the ambient function space, constructed greedily via matching pursuit. By tuning the projection-induced error to the algorithm step size, we establish mean convergence of both the primal objective sub-optimality and the constraint violation to O(√T) and O(T^(3/4)) neighborhoods, respectively. Here, T is the final iteration index, the constant step size is chosen as 1/√T, and the approximation budget is 1/T. Finally, we demonstrate experimentally the effectiveness of the proposed method for risk-aware supervised learning.
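The alternating projected primal/dual update described in the abstract can be sketched in a few dozen lines. The sketch below is an illustrative simplification, not the paper's algorithm: the projection onto a low-dimensional subspace is crudely approximated by truncating the kernel dictionary to a fixed budget (the paper instead builds the subspace greedily via matching pursuit), and the constraint function `risk`, the RBF bandwidth, the step size, and the budget are all hypothetical choices made for the example.

```python
import numpy as np

def rbf(x, y, bw=0.5):
    """Gaussian RBF kernel k(x, y)."""
    return np.exp(-np.sum((np.asarray(x) - np.asarray(y)) ** 2) / (2 * bw ** 2))

class ProjectedPrimalDualKernel:
    """Minimal sketch of alternating projected primal/dual stochastic
    gradient descent/ascent over an RKHS, with one inequality constraint.
    The subspace projection is approximated by dictionary truncation
    (oldest element dropped), NOT by matching pursuit as in the paper."""

    def __init__(self, eta=0.1, budget=100, bw=0.5):
        self.eta = eta        # constant step size (paper suggests ~1/sqrt(T))
        self.budget = budget  # dictionary size cap (approximation budget)
        self.bw = bw
        self.X = []           # kernel dictionary (sample points)
        self.w = []           # expansion weights, f = sum_i w_i k(x_i, .)
        self.lam = 0.0        # dual variable for the single constraint

    def f(self, x):
        """Evaluate the current function estimate at x."""
        return sum(wi * rbf(xi, x, self.bw) for wi, xi in zip(self.w, self.X))

    def step(self, x, y, g):
        """One primal-dual update on sample (x, y). g(fx, y) returns the
        constraint value and its derivative with respect to f(x)."""
        fx = self.f(x)
        gval, ggrad = g(fx, y)
        # Functional gradient of the Lagrangian of squared loss + lam * g:
        # a single new coefficient on the kernel element k(x, .).
        grad_coef = (fx - y) + self.lam * ggrad
        # Shrink existing weights (light Tikhonov regularization), then
        # append the new kernel element with its SGD weight.
        self.w = [wi * (1 - self.eta * 0.1) for wi in self.w]
        self.X.append(np.atleast_1d(x).astype(float))
        self.w.append(-self.eta * grad_coef)
        # Crude "projection": enforce the budget by dropping the oldest atom.
        if len(self.X) > self.budget:
            self.X.pop(0)
            self.w.pop(0)
        # Dual stochastic gradient ascent, projected onto lam >= 0.
        self.lam = max(0.0, self.lam + self.eta * gval)

# Illustrative usage: regress a noisy sine subject to a toy risk
# constraint E[(f(x) - y)^2 - c] <= 0 (all parameters are assumptions).
rng = np.random.default_rng(0)
model = ProjectedPrimalDualKernel()

def risk(fx, y, c=0.5):
    return (fx - y) ** 2 - c, 2.0 * (fx - y)

for _ in range(1000):
    x = rng.uniform(0.0, 2.0 * np.pi)
    y = np.sin(x) + 0.1 * rng.normal()
    model.step(x, y, risk)
```

Because each stochastic gradient step adds one kernel atom, the dictionary would grow linearly with T without the budget; the truncation step is what stands in for the paper's matching-pursuit-based subspace projection in this sketch.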


