IEEE Data Science Workshop

GNSD: a Gradient-Tracking Based Nonconvex Stochastic Algorithm for Decentralized Optimization


Abstract

In the era of big data, it is challenging to train a machine learning model on a large-scale dataset, whether on a single machine or over a distributed system with a central controller. In this paper, we propose a gradient-tracking based nonconvex stochastic decentralized (GNSD) algorithm for solving nonconvex optimization problems, where the data is partitioned into multiple parts and processed by local computational resources. By exchanging parameters between the nodes of a network, GNSD is able to find first-order stationary points (FOSPs) efficiently. Theoretical analysis guarantees that, with a diminishing step-size, the convergence rate of GNSD to FOSPs matches the well-known O(1/√T) rate of stochastic gradient descent. Finally, we perform extensive numerical experiments on computational clusters to demonstrate the advantages of GNSD over other state-of-the-art methods.
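To make the update pattern concrete, below is a minimal sketch of the gradient-tracking recursion that GNSD-style methods are built on, assuming a doubly stochastic mixing matrix W and a user-supplied stochastic gradient oracle; the names gnsd_sketch and stoch_grad and the alpha0/√(t+1) step-size schedule are illustrative assumptions, not details taken from the paper. Each node mixes its iterate and its gradient tracker with its neighbors, then corrects the tracker with the difference of consecutive stochastic gradients.

```python
import numpy as np

def gnsd_sketch(stoch_grad, W, x0, num_iters=1000, alpha0=0.1):
    """Sketch of a gradient-tracking decentralized stochastic update.

    stoch_grad(i, x) -- stochastic gradient of node i's local loss at x
    W                -- doubly stochastic mixing matrix, shape (n, n)
    x0               -- initial parameters, shape (n, d), one row per node
    """
    n, d = x0.shape
    x = x0.copy()
    # Gradient trackers y_i, initialized at the first stochastic gradients.
    g = np.stack([stoch_grad(i, x[i]) for i in range(n)])
    y = g.copy()
    for t in range(num_iters):
        alpha = alpha0 / np.sqrt(t + 1.0)    # diminishing step-size
        x_new = W @ x - alpha * y            # mix with neighbors, step along tracker
        g_new = np.stack([stoch_grad(i, x_new[i]) for i in range(n)])
        y = W @ y + g_new - g                # gradient-tracking correction
        x, g = x_new, g_new
    return x.mean(axis=0)                    # consensus average of the nodes

# Hypothetical usage: n nodes, each with a noisy quadratic loss 0.5*||x - c_i||^2.
rng = np.random.default_rng(0)
n, d = 4, 3
c = rng.normal(size=(n, d))
noisy_grad = lambda i, x: (x - c[i]) + 0.01 * rng.normal(size=d)
W = np.full((n, n), 1.0 / n)                 # fully connected, doubly stochastic
x_hat = gnsd_sketch(noisy_grad, W, np.zeros((n, d)))  # approaches c.mean(axis=0)
```

Intuitively, each tracker y_i is a node's running estimate of the global stochastic gradient; this is what allows the decentralized iterates to match the O(1/√T) rate of centralized stochastic gradient descent under a diminishing step-size.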
