IEEE Transactions on Automatic Control

Randomized Gradient-Free Distributed Optimization Methods for a Multiagent System With Unknown Cost Function



Abstract

This paper proposes a randomized gradient-free distributed optimization algorithm to solve a multiagent optimization problem with set constraints. A randomized gradient-free oracle is built locally in place of the true gradient, so that estimated gradient information guides the update of the decision variables. Thus, the algorithm requires no explicit expressions of the cost functions, only local measurements of them. Row-stochastic and column-stochastic matrices are used as the weighting matrices during communication with neighbors, which makes the algorithm more convenient to implement on directed graphs than a doubly stochastic weighting matrix. Without the true gradient information, we establish asymptotic convergence to an approximate optimal solution, where the optimality gap can be made arbitrarily small. Moreover, it is shown that the proposed algorithm achieves the same convergence rate $O(\ln t/\sqrt{t})$ as state-of-the-art gradient-based methods in similar settings, while requiring less information and admitting more practical communication topologies.
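The randomized gradient-free oracle the abstract refers to can be illustrated with a minimal sketch. The snippet below implements a generic two-point finite-difference estimator with Gaussian smoothing; the function `f`, the smoothing parameter `mu`, and all variable names are illustrative assumptions and not necessarily the paper's exact construction:

```python
import numpy as np

def gradient_free_oracle(f, x, mu=1e-4, rng=None):
    """Two-point randomized gradient-free oracle (Gaussian smoothing).

    Estimates the gradient of a smoothed version of f at x using only
    two function evaluations, i.e. no analytic gradient is required.
    The smoothing parameter `mu` controls the optimality gap: a smaller
    mu yields a smaller bias at the cost of a noisier estimate.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(np.shape(x))       # random search direction
    return (f(x + mu * u) - f(x)) / mu * u     # finite-difference estimate
```

In a distributed scheme of the kind described above, each agent would query such an oracle on its own local cost and mix the resulting update with neighbors' variables through the row-/column-stochastic weighting matrices.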


