IEEE Transactions on Neural Networks and Learning Systems

Randomized Gradient-Free Method for Multiagent Optimization Over Time-Varying Networks

Abstract

In this brief, we consider multiagent optimization over a network in which multiple agents cooperate to minimize a sum of nonsmooth but Lipschitz-continuous functions, subject to a convex state constraint set. The underlying network topology is modeled as time varying. We propose a randomized derivative-free method in which, at each update, random gradient-free oracles are used in place of subgradients (SGs). In contrast to existing work, we do not require that agents be able to compute the SGs of their objective functions. We establish convergence of the method to an approximate solution of the multiagent optimization problem, with an error level that depends on the smoothing parameter and the Lipschitz constant of each agent's objective function. Finally, a numerical example is provided to demonstrate the effectiveness of the method.
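The abstract names two ingredients: a consensus (mixing) step over the network and a local update driven by a random gradient-free oracle rather than an SG, followed by projection onto the convex constraint set. The following is a minimal illustrative sketch of that scheme, not the paper's exact algorithm: it assumes a static complete-graph mixing matrix and a simple box constraint, and all function and variable names are my own.

```python
import numpy as np

def gradient_free_oracle(f, x, mu, rng):
    """Two-point randomized gradient-free oracle for a possibly nonsmooth f.

    The returned vector is an unbiased estimate of the gradient of a
    Gaussian-smoothed version of f, with smoothing parameter mu.
    """
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

def project_box(x, lo, hi):
    # Euclidean projection onto a box, standing in for a general convex set.
    return np.clip(x, lo, hi)

def decentralized_gf_step(xs, fs, W, mu, step, lo, hi, rng):
    """One synchronized iteration for all agents.

    xs : (n_agents, dim) stacked agent states
    fs : list of each agent's local objective function
    W  : doubly stochastic mixing matrix for the current network snapshot
         (a time-varying topology would supply a different W each call)
    """
    mixed = W @ xs  # consensus step: mix with neighbors' states
    new = np.empty_like(xs)
    for i, f in enumerate(fs):
        g = gradient_free_oracle(f, mixed[i], mu, rng)
        new[i] = project_box(mixed[i] - step * g, lo, hi)
    return new
```

For example, with local objectives f_i(x) = |x - a_i| (nonsmooth but Lipschitz) and a diminishing step size, the agents' states drift toward the median of the a_i, a minimizer of the sum, up to an error floor governed by mu, as the abstract indicates.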
