JMLR: Workshop and Conference Proceedings

Logistic Regression Regret: What’s the Catch?



Abstract

We address the problem of achievable regret rates in online logistic regression. We derive lower bounds with logarithmic regret under $L_1$, $L_2$, and $L_\infty$ constraints on the parameter values. The bounds are dominated by $(d/2)\log T$, where $T$ is the horizon and $d$ is the dimensionality of the parameter space. We show that for $d = o(T^{1/3})$ these bounds are achievable in all three cases by Bayesian methods, which attain them up to an additive $(d/2)\log d$ term. Interestingly, different behaviors emerge for larger dimensionality. Specifically, on the negative side, if $d = \Omega(\sqrt{T})$, any algorithm is guaranteed regret of $\Omega(d \log T)$ (greater than $\Theta(\sqrt{T})$) under $L_\infty$ constraints on the parameters (and on the example features). On the positive side, under $L_1$ constraints on the parameters, there exist Bayesian algorithms that achieve regret sub-linear in $d$ for these asymptotically larger values of $d$. For $L_2$ constraints, we show that for large enough $d$ the regret remains linear in $d$ but is no longer logarithmic in $T$. Adapting the \emph{redundancy-capacity} theorem from information theory, we demonstrate a principled methodology based on grids of parameters for deriving lower bounds; grids are also used to derive some of the upper bounds. Our results strengthen the upper bounds of Kakade and Ng (2005) and Foster et al. (2018) for this problem, introduce novel lower bounds, and adapt a methodology that can be used to obtain such bounds for other related problems. They also give a novel characterization of the asymptotic behavior when the dimension of the parameter space is allowed to grow with $T$. Finally, they strengthen connections to the information-theory literature, demonstrating that the actual regret of logistic regression depends on the richness of the parameter class: even within this single problem, richer classes lead to greater regret.
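For reference, the regret studied here is the standard sequential log-loss regret of online logistic regression (the notation below is a common formulation, not necessarily the paper's): at each round $t = 1, \dots, T$ the learner observes features $x_t$, predicts a probability, and then observes the label $y_t \in \{-1, +1\}$, competing against the best fixed parameter in the constraint set $\Theta$:

$$\mathrm{Regret}_T = \sum_{t=1}^{T} \ell_t(\hat{p}_t, y_t) \;-\; \min_{\theta \in \Theta} \sum_{t=1}^{T} \log\left(1 + e^{-y_t \theta^\top x_t}\right),$$

where $\ell_t(\hat{p}_t, y_t)$ is the log-loss of the learner's predicted probability and $\Theta$ is an $L_1$, $L_2$, or $L_\infty$ ball.

As a concrete illustration of the grid idea (a minimal sketch under our own assumptions, not code from the paper), the following one-dimensional example runs a Bayesian mixture over a uniform grid of parameters in $[-B, B]$ and measures its log-loss regret against the best grid point; the names B, grid_size, and the parameter choices are illustrative:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    class GridBayesMixture:
        """Bayesian mixture over a uniform parameter grid (1-d logistic regression)."""
        def __init__(self, B=1.0, grid_size=101):
            self.grid = np.linspace(-B, B, grid_size)            # grid of parameters in [-B, B]
            self.log_w = np.full(grid_size, -np.log(grid_size))  # uniform log-prior

        def predict(self, x):
            # Posterior-weighted probability that y = +1 given feature x.
            w = np.exp(self.log_w - self.log_w.max())
            w /= w.sum()
            return float(w @ sigmoid(self.grid * x))

        def update(self, x, y):
            # Bayes' rule: scale each grid point's weight by its likelihood of y.
            p = sigmoid(self.grid * x)
            self.log_w += np.log(p if y == 1 else 1.0 - p)

    def cumulative_regret(xs, ys, B=1.0, grid_size=101):
        """Mixture's cumulative log-loss minus that of the best grid point."""
        model = GridBayesMixture(B, grid_size)
        alg_loss = 0.0
        for x, y in zip(xs, ys):
            p = model.predict(x)
            alg_loss += -np.log(p if y == 1 else 1.0 - p)
            model.update(x, y)
        # A grid point's final log-weight is -log(grid_size) plus its total
        # log-likelihood, so the best comparator's loss can be read off directly.
        best_loss = -(model.log_w.max() + np.log(len(model.grid)))
        return alg_loss - best_loss

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        T, theta = 1000, 0.7
        xs = rng.uniform(-1.0, 1.0, T)
        ys = (rng.uniform(size=T) < sigmoid(theta * xs)).astype(int)
        print(f"regret over T={T} rounds: {cumulative_regret(xs, ys):.2f}")

Against the best of a fixed grid of $K$ parameter values, such a mixture's regret is at most $\log K$ (the classical redundancy bound); regret against the continuous comparator additionally pays for the grid's resolution, which is where the $(d/2)\log T$ behavior enters.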
