IEEE Transactions on Signal Processing

Projected Stochastic Primal-Dual Method for Constrained Online Learning With Kernels



Abstract

We consider the problem of stochastic optimization with nonlinear constraints, where the decision variable is not vector-valued but instead a function belonging to a reproducing kernel Hilbert space (RKHS). Currently, solutions exist only for special cases of this problem. To solve this constrained problem with kernels, we first generalize the Representer Theorem to a class of saddle-point problems defined over an RKHS. Then, we develop a primal-dual method that executes alternating projected primal/dual stochastic gradient descent/ascent steps on the dual-augmented Lagrangian of the problem. The primal projection sets are low-dimensional subspaces of the ambient function space, greedily constructed using matching pursuit. By tuning the projection-induced error to the algorithm step size, we establish mean convergence of both the primal objective sub-optimality and the constraint violation to O(√T) and O(T^{3/4}) neighborhoods, respectively. Here, T is the final iteration index, and the constant step size is chosen as 1/√T with a 1/T approximation budget. Finally, we experimentally demonstrate the effectiveness of the proposed method for risk-aware supervised learning.
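The alternating projected primal/dual updates the abstract describes can be sketched in a few lines. This is a hedged, minimal illustration rather than the authors' algorithm: the RBF kernel, the regularization weight `mu = 0.1`, and the weight-threshold sparsifier (a crude stand-in for the kernel matching pursuit projection the paper uses to build the low-dimensional primal subspaces) are all assumptions made for the sketch.

```python
import numpy as np

def rbf(x, y, bw=1.0):
    # Gaussian (RBF) kernel; bandwidth bw is an illustrative choice
    return np.exp(-np.sum((x - y) ** 2) / (2 * bw ** 2))

class PrimalDualKernelLearner:
    """Sketch of alternating projected primal-dual stochastic gradient
    for constrained learning in an RKHS.  The function iterate is kept
    as a kernel expansion f(x) = sum_i w_i k(d_i, x) over a dictionary
    of past samples d_i (Representer-Theorem-style parameterization)."""

    def __init__(self, step, eps, bw=1.0, mu=0.1):
        self.step = step   # constant step size, e.g. 1/sqrt(T)
        self.eps = eps     # compression budget (stand-in for matching pursuit)
        self.bw = bw
        self.mu = mu       # assumed RKHS-norm regularization weight
        self.dict = []     # dictionary points d_i
        self.w = []        # expansion weights w_i
        self.lam = 0.0     # dual variable for a single inequality constraint

    def f(self, x):
        return sum(wi * rbf(di, x, self.bw) for di, wi in zip(self.dict, self.w))

    def update(self, x, y, loss_grad, cons_grad, cons_val):
        pred = self.f(x)
        # stochastic gradient of the Lagrangian L = loss + lam * constraint
        g = loss_grad(pred, y) + self.lam * cons_grad(pred, y)
        # primal SGD step in the RKHS: shrink existing weights
        # (regularizer) and append the new sample as a dictionary point
        self.w = [(1 - self.step * self.mu) * wi for wi in self.w]
        self.dict.append(np.asarray(x))
        self.w.append(-self.step * g)
        # crude sparsification: drop negligible weights -- a simplified
        # surrogate for projecting onto a greedily built subspace
        keep = [i for i, wi in enumerate(self.w) if abs(wi) > self.eps]
        self.dict = [self.dict[i] for i in keep]
        self.w = [self.w[i] for i in keep]
        # dual ascent, projected onto the nonnegative orthant
        self.lam = max(0.0, self.lam + self.step * cons_val(pred, y))
```

A usage pattern: stream samples, pass in subgradients of the loss and of the constraint function, and let the dual variable accumulate whatever constraint violation the primal iterate incurs.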
