
Distributed primal-dual stochastic subgradient algorithms for multi-agent optimization under inequality constraints

Abstract

We consider the multi-agent optimization problem in which multiple agents cooperatively optimize the sum of their local convex objective functions over a network, subject to global inequality constraints and a convex constraint set. By characterizing the primal and dual optimal solutions as the saddle points of the associated Lagrangian function, which can be evaluated with stochastic errors, we propose distributed primal-dual stochastic subgradient algorithms for two cases: (i) the time model is synchronous and (ii) the time model is asynchronous. In the first case, we obtain bounds on the convergence properties of the algorithm for a diminishing step size. In the second case, for a constant step size, we establish error bounds on the algorithm's performance. In particular, we prove that the error bounds scale as nn in the number n of agents.
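
To illustrate the structure the abstract describes, below is a minimal Python sketch of one synchronous distributed primal-dual stochastic subgradient iteration on a toy problem. The local costs, the constraint g(x) = sum(x) - 1 <= 0, the box set X, the mixing matrix W, the step-size rule, and the noise level are all illustrative assumptions, not the paper's actual setup or analysis.

```python
import numpy as np

# Toy sketch (not the paper's setup): each agent i holds a local convex cost
# f_i(x) = 0.5*||x - c_i||^2, with a shared inequality constraint
# g(x) = sum(x) - 1 <= 0 and box constraint set X = [-1, 1]^d.

rng = np.random.default_rng(0)
n, d, T = 4, 3, 2000                  # agents, dimension, iterations
c = rng.normal(size=(n, d))           # local cost centers (toy data)
W = np.full((n, n), 1.0 / n)          # doubly stochastic mixing matrix (assumed)

x = np.zeros((n, d))                  # primal iterates, one row per agent
mu = np.zeros(n)                      # dual iterates for g(x) <= 0

for t in range(1, T + 1):
    alpha = 1.0 / np.sqrt(t)          # diminishing step size (synchronous case)
    x_mix = W @ x                     # consensus: mix neighbors' primal estimates
    mu_mix = W @ mu                   # consensus: mix neighbors' dual estimates
    for i in range(n):
        # noisy subgradient of the local Lagrangian L_i(x, mu) = f_i(x) + mu * g(x)
        grad_f = x_mix[i] - c[i]
        grad_g = np.ones(d)           # gradient of g(x) = sum(x) - 1
        noise = 0.01 * rng.normal(size=d)
        # primal step: projected stochastic subgradient descent onto X = [-1, 1]^d
        x[i] = np.clip(
            x_mix[i] - alpha * (grad_f + mu_mix[i] * grad_g + noise),
            -1.0, 1.0,
        )
        # dual step: subgradient ascent, projected onto the nonnegative orthant
        mu[i] = max(0.0, mu_mix[i] + alpha * (np.sum(x[i]) - 1.0))

print("agent estimates:\n", x)
print("constraint value per agent:", x.sum(axis=1) - 1.0)
```

As the iterations proceed, the consensus step pulls the agents' estimates together while the primal-dual updates drive them toward a saddle point of the (noisily evaluated) Lagrangian; the asynchronous variant in the paper replaces the synchronous rounds and diminishing step size with a constant step size.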