Control Theory & Applications, IET

Robust decentralised mean field control in leader following multi-agent systems



Abstract

This study addresses a robust counterpart of deterministic mean field control in a multi-agent system. A decentralised mean field algorithm is proposed to solve a min-max control problem for a large population of heterogeneous agents. In the proposed leader-following scheme, the leader tracks a reference signal unknown to the followers, and each follower tracks a convex combination of the population state average and the leader's state. The leader plays a robust min-max game against the disturbance; the followers play a mean field ε-Nash game against each other while each follower simultaneously plays a robust min-max game against the disturbance. A finite-horizon quadratic cost is considered for all players. In the proposed decentralised algorithm, followers need no knowledge of the leader's or other followers' individual states; they use only an estimate of the population state average. On this basis, a policy iteration method is proposed that guarantees convergence to the saddle-point mean field ε-Nash solution. The proposed method is applied to a large population of agents and compared with a centralised algorithm to demonstrate the results.
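The decentralised scheme described above rests on a mean-field consistency idea: each follower tracks a convex combination of an *estimated* population average and the leader's state, and the estimate is refined until it matches the average the followers actually generate. The sketch below illustrates that fixed-point loop for hypothetical scalar agents; the dynamics, the myopic tracking controller, and all parameter values are illustrative assumptions, not the paper's min-max policy iteration.

```python
import numpy as np

# Hypothetical setup: N heterogeneous scalar followers with dynamics
# x[k+1] = a_i * x[k] + b_i * u[k], each tracking the reference
# r = lam * z + (1 - lam) * x_leader, where z is the current guess of
# the population state average. This is a simplified sketch of the
# mean-field consistency fixed point, not the paper's full algorithm.
rng = np.random.default_rng(0)
N, T, lam = 50, 30, 0.6
a = rng.uniform(0.8, 1.0, N)        # heterogeneous agent parameters
b = rng.uniform(0.5, 1.5, N)
x0 = rng.normal(0.0, 1.0, N)
x_leader = np.ones(T + 1)           # assumed known leader trajectory

def best_response(ai, bi, x0i, ref, gain=0.8):
    """Myopic tracking controller u = gain * (r - a*x) / b:
    a stand-in for one follower's LQ best response."""
    x = np.empty(T + 1)
    x[0] = x0i
    for k in range(T):
        u = gain * (ref[k] - ai * x[k]) / bi
        x[k + 1] = ai * x[k] + bi * u
    return x

# Mean-field fixed point: iterate the guessed population average z
# until the average regenerated by the followers' responses agrees.
z = np.zeros(T + 1)
for _ in range(100):
    ref = lam * z + (1 - lam) * x_leader      # common tracking target
    traj = np.array([best_response(a[i], b[i], x0[i], ref)
                     for i in range(N)])
    z_new = traj.mean(axis=0)                 # regenerated average
    if np.max(np.abs(z_new - z)) < 1e-8:
        break
    z = z_new
```

Because each follower responds only to `z` and the leader's trajectory, no follower ever needs the individual states of the others, which is the decentralisation the abstract refers to; the contraction of the update in `z` is what the paper's convergence guarantee formalises for the saddle-point setting.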


