We consider mean field Markov decision processes with a major player and a large number of minor players, each of which has its own individual objective. The players have decoupled state transition laws and are coupled through their costs via the state distribution of the minor players. We introduce a stochastic difference equation to model the update of the limiting state distribution process and solve the limiting Markov decision problems for the major player and the minor players using local information. Under a solvability assumption on the consistent mean field approximation, the resulting decentralized strategies are stationary and possess an ε-Nash equilibrium property.
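As an informal illustration of the consistency idea (the notation below is ours and serves only as a sketch, not the paper's construction): suppose each minor player has the finite state space $\{1,\ldots,K\}$ and, under a fixed stationary strategy, a transition matrix $Q(x^0_t)$ that may depend on the major player's state $x^0_t$. A limiting state distribution process $p_t$ (a row vector over the $K$ states) could then be propagated by a stochastic difference equation of the form
$$p_{t+1} = p_t \, Q(x^0_t),$$
which is stochastic because it is driven by the major player's random state. The consistent mean field approximation then asks that the distribution process assumed by the players when computing their best responses coincides with the process regenerated by those best responses.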

Additional Metadata
Keywords: finite states, major player, mean field game, minor player
Persistent URL: dx.doi.org/10.1007/978-3-642-35582-0_11
Citation
Huang, M. (2012). Mean field stochastic games with discrete states and mixed players. doi:10.1007/978-3-642-35582-0_11