Journal: JMLR: Workshop and Conference Proceedings

Lipschitz and Comparator-Norm Adaptivity in Online Learning


Abstract

We study Online Convex Optimization in the unbounded setting, where neither the predictions nor the gradients are constrained. The goal is to adapt simultaneously to both the sequence of gradients and the comparator. We first develop parameter-free and scale-free algorithms for a simplified setting with hints. We present two versions: the first adapts to the squared norms of the comparator and the gradients separately, using $O(d)$ time per round; the second adapts to their squared inner products (which measure variance only in the comparator direction), using $O(d^3)$ time per round. We then generalize two prior reductions to the unbounded setting: one to remove the need for hints, and a second to deal with the range-ratio problem (which already arises in prior work). We discuss their optimality in light of prior and new lower bounds. We apply our methods to obtain sharper regret bounds for scale-invariant online prediction with linear models.


