Establishing the existence of Nash equilibria for partially observed stochastic dynamic games is known to be quite challenging, with the difficulties stemming from the noisy nature of the measurements available to individual players (agents) and the decentralized nature of this information. When the number of players is sufficiently large and the interactions among agents are of the mean-field type, one way to overcome this challenge is to investigate the infinite-population limit of the problem, which leads to a mean-field game. In this paper, we consider discrete-time partially observed mean-field games with infinite-horizon discounted cost criteria. Using the technique of converting the original partially observed stochastic control problem to a fully observed one on the belief space, together with the dynamic programming principle, we establish the existence of Nash equilibria for these game models under very mild technical conditions. We then show that the mean-field equilibrium policy, when adopted by each agent, forms an approximate Nash equilibrium for games with sufficiently many agents.