2012 IEEE Information Theory Workshop

Sequential normalized maximum likelihood in log-loss prediction


Abstract

The paper considers sequential prediction of individual sequences with log loss using an exponential family of distributions. We first show that the commonly used maximum likelihood strategy is suboptimal and requires an additional assumption about boundedness of the data sequence. We then show that both problems can be addressed by adding the currently predicted outcome to the calculation of the maximum likelihood, followed by normalization of the distribution. The strategy obtained in this way is known in the literature as the sequential normalized maximum likelihood (SNML) strategy. We show that for general exponential families, the regret is bounded by the familiar (k/2)log n and is thus optimal up to O(1). We also introduce an approximation to SNML, flattened maximum likelihood, which is much easier to compute than SNML itself, while retaining the optimal regret under some additional assumptions. We finally discuss the relationship to the Bayes strategy with Jeffreys' prior.
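As a concrete illustration (not taken from the paper itself), the SNML construction described in the abstract can be sketched for the simplest exponential family, the Bernoulli model: to predict the next outcome, append each candidate outcome to the observed sequence, refit the maximum-likelihood parameter, evaluate the ML probability of the extended sequence, and normalize over the candidates. Function names below are illustrative.

```python
def ml_seq_prob(k, n):
    """Maximum-likelihood probability of a binary sequence with k ones
    out of n outcomes: the ML parameter is p = k/n, and the sequence
    probability is p^k (1-p)^(n-k). By convention the empty sequence
    has probability 1 (Python's 0**0 == 1 handles the boundary cases)."""
    if n == 0:
        return 1.0
    p = k / n
    return (p ** k) * ((1 - p) ** (n - k))

def snml_predict(k, n):
    """SNML probability that the next outcome is 1, given k ones among
    the n outcomes seen so far: include the predicted outcome in the
    ML fit, then normalize over both possible outcomes."""
    w1 = ml_seq_prob(k + 1, n + 1)  # extend with a 1, refit ML
    w0 = ml_seq_prob(k, n + 1)      # extend with a 0, refit ML
    return w1 / (w0 + w1)
```

With no data the predictor is uniform (`snml_predict(0, 0) == 0.5`), and unlike the plain ML strategy it never assigns probability zero to an outcome, which is what keeps the cumulative log loss bounded.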
