JMLR: Workshop and Conference Proceedings

Exploring Fast and Communication-Efficient Algorithms in Large-Scale Distributed Networks

Abstract

Communication overhead has become a significant bottleneck in data-parallel networks as model sizes and training-set sizes grow. In this work, we propose LPC-SVRG, a new algorithm with quantized gradients, and its accelerated variant ALPC-SVRG, which effectively reduce communication complexity while matching the convergence of the unquantized algorithms. Specifically, we formulate the heuristic gradient clipping technique within the quantization scheme and show that the unbiased quantization methods of related work [3, 33, 38] are special cases of ours. We introduce double sampling in the accelerated algorithm ALPC-SVRG to combine full-precision and low-precision gradients, achieving acceleration with lower communication overhead. Our analysis addresses the nonsmooth composite problem, which makes our algorithms more general. Experiments on linear models and deep neural networks validate the effectiveness of our algorithms.
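The abstract describes the quantization idea only at a high level. Below is a minimal illustrative sketch, in Python with NumPy, of a QSGD-style unbiased stochastic quantizer in which a heuristic clipping threshold doubles as the scaling factor, in the spirit of the clipping-within-quantization idea above; the function name `quantize_clipped`, the bit width, and the threshold are our assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def quantize_clipped(v, clip, bits=4, rng=None):
    """QSGD-style stochastic quantizer with a heuristic clipping
    threshold (illustrative sketch; not the paper's exact scheme).

    Coordinates are clipped to [-clip, clip], then stochastically
    rounded onto a uniform grid of 2**bits - 1 levels, so each worker
    communicates a sign, a small level index, and the shared scale
    `clip` instead of a full-precision float per coordinate.
    Quantization of the clipped value is unbiased: E[q] equals the
    clipped input.
    """
    rng = rng or np.random.default_rng()
    levels = 2 ** bits - 1
    x = np.clip(v, -clip, clip)
    scaled = np.abs(x) / clip * levels      # position on the grid, in [0, levels]
    low = np.floor(scaled)
    p_up = scaled - low                     # probability of rounding up
    q = low + (rng.random(x.shape) < p_up)  # stochastic rounding
    return np.sign(x) * q / levels * clip

# Example: quantize a stochastic gradient before communicating it.
g = np.random.default_rng(0).standard_normal(8)
print(quantize_clipped(g, clip=2.0, bits=4))
```

In LPC-SVRG such a quantizer would be applied to the variance-reduced gradient before communication, and ALPC-SVRG's double sampling would additionally mix full-precision and low-precision gradients; the abstract does not fix those details, so the sketch stops at the quantizer itself.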