
An accelerated communication-efficient primal-dual optimization framework for structured machine learning



Abstract

Distributed optimization algorithms are essential for training machine learning models on very large-scale datasets. However, they often suffer from communication bottlenecks. Confronting this issue, a communication-efficient primal-dual coordinate ascent framework (CoCoA) and its improved variant CoCoA+ have been proposed, achieving a convergence rate of $\mathcal{O}(1/t)$ for solving empirical risk minimization problems with Lipschitz continuous losses. In this paper, an accelerated variant of CoCoA+ is proposed and shown to possess a convergence rate of $\mathcal{O}(1/t^2)$ in terms of reducing suboptimality. The analysis of this rate is also notable in that the convergence rate bounds involve constants that, except in extreme cases, are significantly reduced compared to those previously provided for CoCoA+. The results of numerical experiments are provided to show that acceleration can lead to significant performance gains.
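The $\mathcal{O}(1/t)$ to $\mathcal{O}(1/t^2)$ improvement the abstract describes is the hallmark of Nesterov-style acceleration. As a minimal sketch (not the paper's distributed CoCoA+ algorithm), the following compares plain gradient descent with an accelerated variant on a simple ill-conditioned smooth quadratic; the objective, step size, and iteration count are illustrative choices, not values from the paper.

```python
def gradient_descent(grad, x0, step, iters):
    """Plain gradient descent: suboptimality shrinks as O(1/t)."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

def accelerated_descent(grad, x0, step, iters):
    """Nesterov's accelerated gradient: suboptimality shrinks as O(1/t^2)."""
    x = list(x0)
    y = list(x0)  # extrapolated point at which the gradient is evaluated
    for t in range(1, iters + 1):
        g = grad(y)
        x_new = [yi - step * gi for yi, gi in zip(y, g)]
        beta = (t - 1) / (t + 2)  # standard momentum coefficient
        y = [xn + beta * (xn - xo) for xn, xo in zip(x_new, x)]
        x = x_new
    return x

# Illustrative objective: f(x) = 0.5 * (x1^2 + 100 * x2^2),
# a smooth convex quadratic with condition number 100 (L = 100).
f = lambda x: 0.5 * (x[0] ** 2 + 100.0 * x[1] ** 2)
grad = lambda x: [x[0], 100.0 * x[1]]

x0, step, iters = [1.0, 0.0], 1.0 / 100.0, 50  # step = 1/L
x_gd = gradient_descent(grad, x0, step, iters)
x_acc = accelerated_descent(grad, x0, step, iters)
print(f(x_acc) < f(x_gd))  # acceleration reaches lower suboptimality
```

After the same iteration budget, the accelerated iterate attains a strictly lower objective value, consistent with its faster $\mathcal{O}(1/t^2)$ suboptimality bound.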


