IEEE International Parallel and Distributed Processing Symposium Workshops

Near-Optimal Straggler Mitigation for Distributed Gradient Methods

Abstract

Modern learning algorithms use gradient descent updates to train inferential models that best explain data. Scaling these approaches to massive data sets requires proper distributed gradient descent schemes, in which worker nodes compute partial gradients over their local portions of the data and send the results to a master node, where the computations are aggregated into a full gradient and the learning model is updated. A major performance bottleneck arises when some of the worker nodes run slowly: these nodes, a.k.a. stragglers, can significantly slow down the computation, since the slowest node may dictate the overall completion time. We propose a distributed computing scheme, called Batched Coupon's Collector (BCC), to alleviate the effect of stragglers in gradient methods. We prove that our BCC scheme is robust to a near-optimal number of random stragglers. We also empirically demonstrate that, compared with other straggler mitigation strategies, our proposed BCC scheme reduces the run-time by up to 85.4% on Amazon EC2 clusters. Finally, we generalize the proposed BCC scheme to minimize the completion time when implementing gradient descent-based algorithms over heterogeneous worker nodes.
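To make the coupon-collector intuition concrete, below is a minimal simulation sketch of the BCC stopping rule described in the abstract. It is not the authors' implementation; all names and parameters (`bcc_round`, `num_batches`, the exponential finish-time model, the 10% straggler rate) are illustrative assumptions. The sketch assumes the data is split into equal batches, each worker is independently assigned one batch uniformly at random, and the master declares the full gradient ready as soon as the partial gradients received so far cover every batch.

```python
import random

def bcc_round(num_workers, num_batches, finish_times):
    """Return the time at which the master holds a full gradient.

    finish_times[w] is the (random) time at which worker w would report
    its partial gradient; stragglers simply have large finish times.
    """
    # Random batch assignment: worker w computes the partial gradient
    # of a single batch chosen uniformly at random (assumed BCC setup).
    assignment = [random.randrange(num_batches) for _ in range(num_workers)]

    covered = set()
    # Process workers in the order they finish (fastest first).
    for t, w in sorted((finish_times[w], w) for w in range(num_workers)):
        covered.add(assignment[w])
        if len(covered) == num_batches:   # every batch collected at least once
            return t                      # full gradient ready; remaining stragglers are ignored
    return float("inf")                   # coverage failed this round

if __name__ == "__main__":
    random.seed(0)
    # Illustrative run: 40 workers, 10 batches, exponential compute times,
    # with roughly 10% of workers delayed as stragglers.
    times = [random.expovariate(1.0) + (5.0 if random.random() < 0.1 else 0.0)
             for _ in range(40)]
    print("full gradient ready at t =", bcc_round(40, 10, times))
```

The key design point the sketch illustrates is that the master never waits for any specific worker: it stops at the coupon-collector moment when the set of returned batches first covers the data, which is what makes the scheme tolerant to a near-optimal number of random stragglers.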
