International Conference on Algorithmic Learning Theory

Re-adapting the Regularization of Weights for Non-stationary Regression



Abstract

The goal of a learner in standard online learning is to suffer cumulative loss not much larger than that of the best-performing prediction function from some fixed class. Numerous algorithms have been shown to bring this gap arbitrarily close to zero relative to the best function chosen offline. Nevertheless, many real-world applications (such as adaptive filtering) are non-stationary in nature, and the best prediction function may not be fixed but may drift over time. We introduce a new algorithm for regression that uses a per-feature learning rate, and we provide a regret bound with respect to the best sequence of functions with drift. We show that as long as the cumulative drift is sub-linear in the length of the sequence, our algorithm suffers regret that is sub-linear as well. We also sketch an algorithm that achieves the best of both worlds: in the stationary setting it has log(T) regret, while in the non-stationary setting its regret is sub-linear. Simulations demonstrate the usefulness of our algorithm compared with other state-of-the-art approaches.
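
For concreteness, the regret and drift quantities referred to above can be written as follows. This is a standard formulation for drifting comparators; the particular norm used to measure the drift is an assumption here, since the abstract does not specify it:

\[
  \mathrm{Regret}_T \;=\; \sum_{t=1}^{T} \ell_t(w_t) \;-\; \sum_{t=1}^{T} \ell_t(u_t),
  \qquad
  \mathrm{Drift}_T \;=\; \sum_{t=1}^{T-1} \lVert u_{t+1} - u_t \rVert,
\]

where $w_1,\dots,w_T$ are the learner's predictors, $u_1,\dots,u_T$ is any comparator sequence, and $\ell_t$ is the loss at round $t$. The claim in the abstract is that $\mathrm{Drift}_T = o(T)$ implies $\mathrm{Regret}_T = o(T)$.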
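A minimal sketch, in Python, of the kind of learner the abstract describes: online regression under squared loss with a per-feature learning rate, plus a periodic reset of the rates so the learner can re-adapt when the best predictor drifts. The AdaGrad-style rate and the fixed reset period are illustrative assumptions, not the paper's exact update.

import numpy as np

class PerFeatureDriftRegressor:
    """Online squared-loss regression with per-feature learning rates.

    Sketch only: each feature keeps an accumulated squared-input term
    that acts as an inverse learning rate, and these accumulators are
    periodically reset so the model can re-adapt to a drifting target.
    """

    def __init__(self, dim, base_rate=1.0, reset_every=None):
        self.dim = dim
        self.base_rate = base_rate
        self.reset_every = reset_every  # illustrative fixed reset period
        self.t = 0
        self.w = np.zeros(dim)
        self.h = np.ones(dim)           # per-feature inverse learning rates

    def predict(self, x):
        return float(self.w @ x)

    def update(self, x, y):
        self.t += 1
        err = self.predict(x) - y       # gradient factor for squared loss
        self.h += x * x                 # adapt each feature's rate
        self.w -= self.base_rate * err * x / np.sqrt(self.h)
        if self.reset_every and self.t % self.reset_every == 0:
            self.h = np.ones(self.dim)  # forget old scale; re-adapt to drift
        return 0.5 * err * err          # instantaneous squared loss

# Example: the best linear predictor drifts slowly; the learner tracks it.
rng = np.random.default_rng(0)
learner = PerFeatureDriftRegressor(dim=5, reset_every=500)
u = rng.normal(size=5)
for t in range(2000):
    u += 0.001 * rng.normal(size=5)    # slow drift of the best predictor
    x = rng.normal(size=5)
    y = float(u @ x) + 0.01 * rng.normal()
    learner.update(x, y)

The reset is what distinguishes this sketch from a purely stationary per-feature-rate learner: without it, the accumulated rates shrink the effective step size toward zero and the model can no longer follow a moving target.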
