JMLR: Workshop and Conference Proceedings

Learning Sparse Polymatrix Games in Polynomial Time and Sample Complexity



Abstract

We consider the problem of learning sparse polymatrix games from observations of strategic interactions. We show that a polynomial-time method based on $\ell_{1,2}$-group-regularized logistic regression recovers a game whose Nash equilibria are the $\epsilon$-Nash equilibria of the game from which the data was generated (the true game), using $O(m^4 d^4 \log(pd))$ samples of strategy profiles, where $m$ is the maximum number of pure strategies of a player, $p$ is the number of players, and $d$ is the maximum degree of the game graph. Under slightly more stringent separability conditions on the payoff matrices of the true game, we show that our method learns a game with exactly the same Nash equilibria as the true game. We also show that $\Omega(d \log(pm))$ samples are necessary for any method to consistently recover a game with the same Nash equilibria as the true game from observations of strategic interactions.
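To make the estimator concrete, below is a minimal sketch of the kind of $\ell_{1,2}$-group-regularized multinomial logistic regression the abstract describes, fit by proximal gradient descent for a single player's neighborhood. The function name `learn_neighborhood`, the optimizer, and all hyperparameters (step size, iteration count, thresholds) are illustrative assumptions, not the paper's exact estimator or tuning.

```python
import numpy as np

def learn_neighborhood(X, i, m, lam, lr=0.1, iters=500):
    """Sketch (not the paper's exact method): estimate the game-graph
    neighbors of player i via l_{1,2}-group-regularized multinomial
    logistic regression, fit by proximal gradient descent (ISTA).

    X   : (n, p) int array; X[t, j] is player j's pure strategy in sample t.
    m   : number of pure strategies per player.
    lam : group-lasso regularization strength (assumed hyperparameter).
    """
    n, p = X.shape
    others = [j for j in range(p) if j != i]
    # One-hot encode the other players' strategies; the m features of
    # each player j form one regularization group (one payoff block).
    F = np.zeros((n, len(others) * m))
    for c, j in enumerate(others):
        F[np.arange(n), c * m + X[:, j]] = 1.0
    y = X[:, i]
    W = np.zeros((F.shape[1], m))  # one coefficient column per strategy of i
    b = np.zeros(m)
    for _ in range(iters):
        # Gradient step on the multinomial logistic (softmax) loss.
        Z = F @ W + b
        Z -= Z.max(axis=1, keepdims=True)
        P = np.exp(Z)
        P /= P.sum(axis=1, keepdims=True)
        Y = np.zeros_like(P)
        Y[np.arange(n), y] = 1.0
        W -= lr * (F.T @ (P - Y)) / n
        b -= lr * (P - Y).mean(axis=0)
        # Proximal step: soft-threshold each player's m x m coefficient
        # block by its Frobenius norm (the group-lasso prox operator).
        for c in range(len(others)):
            blk = W[c * m:(c + 1) * m, :]
            norm = np.linalg.norm(blk)
            blk *= max(0.0, 1.0 - lr * lam / norm) if norm > 1e-12 else 0.0
    # Neighbors are the players whose coefficient block survived shrinkage.
    return [j for c, j in enumerate(others)
            if np.linalg.norm(W[c * m:(c + 1) * m, :]) > 1e-6]
```

The grouping is the key design choice: each group of coefficients corresponds to one payoff block between player $i$ and another player $j$, so the $\ell_{1,2}$ penalty zeroes out whole blocks at once and thereby recovers the sparse game graph. Running the routine once per player and taking the union of the recovered edges yields an estimate of the full graph.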
