Conference on Neural Information Processing Systems

Policy Optimization Provably Converges to Nash Equilibria in Zero-Sum Linear Quadratic Games



Abstract

We study the global convergence of policy optimization for finding the Nash equilibria (NE) in zero-sum linear quadratic (LQ) games. To this end, we first investigate the landscape of LQ games, viewing it as a nonconvex-nonconcave saddle-point problem in the policy space. Specifically, we show that despite its nonconvexity and nonconcavity, zero-sum LQ games have the property that the stationary point of the objective function with respect to the linear feedback control policies constitutes the NE of the game. Building upon this, we develop three projected nested-gradient methods that are guaranteed to converge to the NE of the game. Moreover, we show that all these algorithms enjoy both globally sublinear and locally linear convergence rates. Simulation results are also provided to illustrate the satisfactory convergence properties of the algorithms. To the best of our knowledge, this work appears to be the first one to investigate the optimization landscape of LQ games, and provably show the convergence of policy optimization methods to the NE. Our work serves as an initial step toward understanding the theoretical aspects of policy-based reinforcement learning algorithms for zero-sum Markov games in general.
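To make the nested-gradient idea concrete, below is a minimal numerical sketch, not the paper's algorithm or its analytic gradient formulas: the dynamics and cost matrices are made-up toy data, finite-difference gradients stand in for the exact policy gradients, and a crude instability guard stands in for the projection step of the projected nested-gradient methods. The outer player (minimizer, gain K) takes one descent step after the inner player (maximizer, gain L) has been approximately solved by gradient ascent.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical zero-sum LQ game (toy data, not from the paper):
# x_{t+1} = A x_t + B u_t + C w_t, with u = -K x (minimizer) and w = -L x (maximizer),
# stage cost x'Q x + u'Ru u - w'Rw w, initial-state covariance Sigma0.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
C = np.array([[0.2], [0.1]])
Q = np.eye(2)
Ru = np.eye(1)
Rw = 5.0 * np.eye(1)   # chosen large so the inner maximization stays well-posed
Sigma0 = np.eye(2)

def cost(K, L):
    """Expected infinite-horizon cost of the feedback pair (K, L), via a Lyapunov equation."""
    Acl = A - B @ K - C @ L                       # closed-loop dynamics
    if np.max(np.abs(np.linalg.eigvals(Acl))) >= 1.0:
        return np.inf                             # crude guard in place of the projection step
    Qcl = Q + K.T @ Ru @ K - L.T @ Rw @ L         # closed-loop stage cost
    P = solve_discrete_lyapunov(Acl.T, Qcl)       # solves P = Acl' P Acl + Qcl
    return np.trace(P @ Sigma0)

def num_grad(f, X, eps=1e-6):
    """Central finite-difference gradient; stands in for the analytic policy gradient."""
    G = np.zeros_like(X)
    for idx in np.ndindex(X.shape):
        E = np.zeros_like(X); E[idx] = eps
        G[idx] = (f(X + E) - f(X - E)) / (2 * eps)
    return G

def nested_gradient(K, L, outer_steps=100, inner_steps=50, eta_K=5e-3, eta_L=1e-2):
    """Outer gradient descent on K; each outer step first runs inner gradient ascent on L."""
    for _ in range(outer_steps):
        for _ in range(inner_steps):              # inner loop: maximize over L for fixed K
            L = L + eta_L * num_grad(lambda Lx: cost(K, Lx), L)
        K = K - eta_K * num_grad(lambda Kx: cost(Kx, L), K)   # outer descent step on K
    return K, L

K0 = np.zeros((1, 2))
L0 = np.zeros((1, 2))
K_star, L_star = nested_gradient(K0, L0)
print("K* ~", K_star, "\nL* ~", L_star, "\ncost ~", cost(K_star, L_star))
```

At a stationary point of this scheme neither player can improve by a local change of its linear feedback gain, which is the sense in which the paper identifies stationary points with the Nash equilibrium of the game.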
