
A computation-efficient distributed algorithm for convex constrained optimization problem


Abstract

A computation-efficient distributed algorithm, based on the stochastic gradient projection method, is proposed over an undirected network for solving convex constrained optimization problems whose objective is a sum of smooth convex functions and non-smooth regularization terms, subject to locally general constraints. The algorithm comprises five parts: variable initialization, parameter definition and selection, information exchange, stochastic gradient computation, and variable update. By adopting a variance reduction technique, the proposed algorithm reduces the amount of computation compared with related works. Assuming smooth and strongly convex cost functions, the proposed algorithm finds the exact optimal solution in expectation, provided the constant step-size is less than an explicitly estimated upper bound. Compared with existing distributed schemes, the proposed algorithm is more convenient for general constrained optimization problems and has low computational complexity in terms of the total number of local gradient evaluations. The invention has broad application to modern large-scale information processing problems in machine learning, where the samples of a training dataset are randomly distributed across multiple computing nodes and each smooth objective function is further considered as the average of several constituent functions.

Figure 1 (flowchart of the proposed algorithm): each node sets k = 0 and the maximum number of iterations k_max; initializes its local variables and selects a fixed step-size and a fixed tunable parameter according to the network and the problem; computes the stochastic gradient according to the variance reduction technique; updates the main variable according to the stochastic gradient projection method; updates the Lagrangian multipliers according to the main variable and the projection operator; then sets k = k + 1 and repeats until k >= k_max.
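The loop in Figure 1 can be illustrated with a minimal, single-node Python sketch. This is not the patented scheme itself: the inter-node information exchange and the Lagrangian multiplier updates are omitted, and the least-squares objective, box constraint set, step-size, and iteration counts are illustrative assumptions. It shows only the two steps the abstract emphasizes: an SVRG-style variance-reduced stochastic gradient and a projection step onto a local constraint set.

import numpy as np

rng = np.random.default_rng(0)

# Toy local objective at one node: f(x) = (1/m) * sum_i 0.5 * (a_i . x - b_i)^2,
# minimized over the box X = [-1, 1]^d, a simple stand-in for the
# "locally general constraints" of the abstract.
m, d = 20, 5
A = rng.normal(size=(m, d))
b = rng.normal(size=m)

def grad_i(x, i):
    # Gradient of the i-th constituent function 0.5 * (a_i . x - b_i)^2.
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):
    # Exact gradient of the local objective (average of the constituents).
    return A.T @ (A @ x - b) / m

def project(x):
    # Euclidean projection onto the box constraint set [-1, 1]^d.
    return np.clip(x, -1.0, 1.0)

alpha = 0.02   # fixed step-size; the patent requires it below an estimated bound
x = np.zeros(d)

for epoch in range(30):
    snapshot = x.copy()        # reference point for variance reduction
    mu = full_grad(snapshot)   # one full local gradient per epoch
    for _ in range(m):
        i = rng.integers(m)
        # Variance-reduced stochastic gradient (SVRG-style estimator):
        g = grad_i(x, i) - grad_i(snapshot, i) + mu
        # Stochastic gradient projection step on the main variable:
        x = project(x - alpha * g)

print("approximate constrained minimizer:", x)

In the full distributed algorithm, each node would additionally exchange information with its neighbors over the undirected network before updating its main variable, and would update its Lagrangian multipliers via the projection operator to enforce its local constraints; those steps are omitted here for brevity.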

