International Conference on Machine Learning

Tight Bounds on Minimax Regret under Logarithmic Loss via Self-Concordance



Abstract

We consider the classical problem of sequential probability assignment under logarithmic loss while competing against an arbitrary, potentially nonparametric class of experts. We obtain tight bounds on the minimax regret via a new approach that exploits the self-concordance property of the logarithmic loss. We show that for any expert class with (sequential) metric entropy O(γ^{-p}) at scale γ, the minimax regret is O(n^{p/(p+1)}), and that this rate cannot be improved without additional assumptions on the expert class under consideration. As an application of our techniques, we resolve the minimax regret for nonparametric Lipschitz classes of experts.
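To make the setting concrete, the following is a minimal sketch (not from the paper) of sequential probability assignment under logarithmic loss for the simplest parametric expert class, the constant Bernoulli experts, using the classical add-one (Laplace) predictor. The function name and setup are illustrative assumptions, not the paper's construction; the paper's contribution concerns the harder nonparametric regime, where regret grows polynomially in n rather than logarithmically as here.

```python
import math

def laplace_regret(bits):
    """Regret of the sequential add-one (Laplace) predictor against
    the best constant Bernoulli expert in hindsight, under log loss.

    Illustrative only: the constant-Bernoulli class is parametric,
    so this regret is O(log n); nonparametric classes with metric
    entropy O(gamma^{-p}) instead give regret O(n^{p/(p+1)})."""
    n = len(bits)
    # Cumulative log loss of the Laplace predictor, which assigns
    # p_t(1) = (k_t + 1) / (t + 2) where k_t = number of ones so far.
    loss_pred, ones = 0.0, 0
    for t, b in enumerate(bits):
        p1 = (ones + 1) / (t + 2)
        loss_pred += -math.log(p1 if b == 1 else 1.0 - p1)
        ones += b
    # Log loss of the best expert in hindsight: Bernoulli(k/n).
    k = sum(bits)
    best = 0.0
    if 0 < k < n:
        q = k / n
        best = -(k * math.log(q) + (n - k) * math.log(1.0 - q))
    return loss_pred - best
```

For the all-ones sequence of length n the regret is exactly log(n + 1), consistent with the logarithmic rate for parametric classes; the abstract's result shows that nonparametric classes of metric entropy γ^{-p} force the strictly larger rate n^{p/(p+1)}.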

