
A linearly Convergent Distributed Proximal Gradient Algorithm via Edge-Based Method


Abstract

This patent (AU2020101008A4) addresses the composite convex optimization problem with a non-smooth term over an undirected network, a generic model with wide applications. To solve this problem, a distributed proximal gradient strategy based on an edge-based method is proposed. The algorithm consists of five parts: determining parameters, initializing variables, exchanging information, computing gradients, and updating variables. With the parameters tuned to an appropriate range, the proposed algorithm converges linearly to the global optimal solution, a rate comparable to centralized optimization methods. By applying an edge-based Laplacian matrix, the algorithm admits adjustable edge weights, allowing more flexible edge-weight selection. The present invention has broad application in large-scale machine learning.

Fig. 1 (flowchart): Start → each agent sets k = 0 and the maximum number of iterations → each agent initializes its local variables → select the edge weights, step size, and adjustable parameters → each agent sends its variables to its neighbours and receives their variables → each agent updates its variables and computes the gradient → each agent sets k = k + 1 and repeats until the maximum number of iterations is reached → End.
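The five steps in the flowchart (initialize, exchange with neighbours, compute gradients, proximal update, iterate) can be sketched in code. The following is a minimal illustration only, not the patented algorithm: it uses a generic distributed proximal gradient iteration with a hypothetical problem in which each agent holds a local least-squares term and all agents share an L1 regularizer, and a ring network with hand-picked edge weights standing in for the edge-based Laplacian. All problem data, the mixing matrix `W`, and the step size `alpha` are assumptions chosen for the sketch.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Hypothetical local objectives: f_i(x) = 0.5 * ||A_i x - b_i||^2,
# shared non-smooth term g(x) = lam * ||x||_1.
rng = np.random.default_rng(0)
n, d = 4, 3                                   # 4 agents, 3-dimensional variable
A = [rng.standard_normal((5, d)) for _ in range(n)]
b = [rng.standard_normal(5) for _ in range(n)]
lam = 0.1

# Ring network: symmetric, doubly stochastic mixing matrix built from
# (assumed) edge weights; in the patent these weights are adjustable.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.25
    W[i, (i - 1) % n] = 0.25

alpha = 0.01                                   # step size (assumed)
x = np.zeros((n, d))                           # one local copy per agent

for k in range(2000):
    # Exchange: each agent averages its neighbours' variables via W.
    mixed = W @ x
    # Compute local gradients of the smooth terms f_i.
    grads = np.stack([A[i].T @ (A[i] @ x[i] - b[i]) for i in range(n)])
    # Proximal gradient update of each local copy.
    x = soft_threshold(mixed - alpha * grads, alpha * lam)
```

With a constant step size this basic iteration only reaches a neighbourhood of consensus; the patent's edge-based construction is what it credits with exact linear convergence, which this sketch does not reproduce.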
