IEEE International Parallel and Distributed Processing Symposium

Avoiding Synchronization in First-Order Methods for Sparse Convex Optimization


Abstract

Parallel computing has played an important role in speeding up convex optimization methods for big data analytics and large-scale machine learning (ML). However, the scalability of these optimization methods is inhibited by the cost of communicating and synchronizing processors in a parallel setting. Iterative ML methods are particularly sensitive to communication cost since they often require communication every iteration. In this work, we extend well-known techniques from Communication-Avoiding Krylov subspace methods to first-order, block coordinate descent methods for Support Vector Machines and Proximal Least-Squares problems. Our Synchronization-Avoiding (SA) variants reduce the latency cost by a tunable factor of 's' at the expense of a factor of 's' increase in flops and bandwidth costs. We show that the SA-variants are numerically stable and can attain large speedups of up to 5.1x on a Cray XC30 supercomputer.
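The latency-for-flops trade-off the abstract describes can be illustrated with a toy sketch. This is my own simplified construction, not the paper's actual algorithm: cyclic coordinate descent on a quadratic f(x) = ½xᵀAx − bᵀx, where each "round" stands in for one latency-bound synchronization. The classical variant (s = 1) synchronizes once per coordinate update; the SA-style variant gathers what it needs for s updates per round, doing the same total work in s-fold fewer rounds.

```python
# Toy sketch of the s-step idea behind Synchronization-Avoiding
# block coordinate descent (assumption: simplified illustration only,
# not the algorithm from the paper).

def cd_quadratic(A, b, iters, s):
    """Cyclic coordinate descent; performs up to `s` updates per round.

    In a distributed run, each round is where a processor would fetch
    the next s columns of A and the current residual information, so
    the latency cost scales with `rounds`, not with `iters`.
    """
    n = len(b)
    x = [0.0] * n
    rounds = 0
    k = 0
    while k < iters:
        rounds += 1  # one simulated synchronization round
        for _ in range(min(s, iters - k)):
            i = k % n
            # Exact minimization of f along coordinate i:
            # x_i <- (b_i - sum_{j != i} A_ij x_j) / A_ii
            resid = b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = resid / A[i][i]
            k += 1
    return x, rounds

# Small diagonally dominant SPD system with exact solution x = (1, 1).
A = [[4.0, 1.0], [1.0, 3.0]]
b = [5.0, 4.0]

x1, r1 = cd_quadratic(A, b, iters=40, s=1)  # classical: 40 rounds
x4, r4 = cd_quadratic(A, b, iters=40, s=4)  # SA-style: 10 rounds
```

Because the s-step variant applies the same update sequence, it reaches the same iterate as the classical one; only the number of synchronization rounds drops from 40 to 10, mirroring the tunable factor-of-s latency reduction claimed in the abstract.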
