Slow Learners are Fast


Abstract

Online learning algorithms have impressive convergence properties when it comes to risk minimization and convex games on very large problems. However, they are inherently sequential in their design which prevents them from taking advantage of modern multi-core architectures. In this paper we prove that online learning with delayed updates converges well, thereby facilitating parallel online learning.
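A minimal sketch of the idea behind delayed updates: each gradient step is applied using parameters that are τ iterations stale, mimicking the lag introduced when several workers compute gradients in parallel. This is an illustration of the concept only, not the paper's exact algorithm; the function names, loss, and step size below are assumptions chosen for the example.

```python
import numpy as np

def delayed_sgd(grad_fn, x0, data, tau=3, lr=0.05):
    """Online gradient descent with delayed updates: the gradient
    applied at step t is computed from the parameters of step t - tau,
    simulating the staleness caused by parallel workers."""
    x = x0.copy()
    history = [x0.copy()] * tau  # parameter snapshots, oldest first
    for z in data:
        x_old = history.pop(0)   # parameters from tau steps ago
        g = grad_fn(x_old, z)    # gradient computed on stale parameters
        x = x - lr * g           # update the current parameters
        history.append(x.copy())
    return x

# Toy example (hypothetical): squared loss f(x; z) = 0.5 * (x - z)^2,
# so grad_fn(x, z) = x - z, and the stream repeats the target z = 2.0.
grad = lambda x, z: x - z
data = np.full(200, 2.0)
x_final = delayed_sgd(grad, np.array(0.0), data, tau=3, lr=0.05)
```

With a small enough step size relative to the delay τ, the iterates still approach the minimizer (here, 2.0), which is the intuition the paper makes precise with convergence guarantees.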
