This paper presents an LR-I lagging anchor algorithm that combines the lagging anchor method with the LR-I (linear reward-inaction) learning algorithm. We prove that this decentralized learning algorithm converges in strategies to a Nash equilibrium in two-player, zero-sum, two-action matrix games, with each player needing knowledge only of its own action and reward.
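The combination described above can be sketched as follows. This is a minimal illustrative simulation, not the paper's implementation: it assumes the standard LR-I update (move toward the chosen action, weighted by the binary reward) plus a lagging-anchor pull, with hypothetical parameter names `eta` (learning rate) and `gamma` (anchor rate) and matching pennies as the example zero-sum game.

```python
import random

def play(num_steps=50000, eta=0.05, gamma=0.005, seed=0):
    """Two decentralized learners in matching pennies (illustrative sketch)."""
    rng = random.Random(seed)
    # Each player keeps a mixed strategy p and a slowly moving anchor p_bar
    # over its two actions; both start uniform.
    p = [[0.5, 0.5], [0.5, 0.5]]
    p_bar = [[0.5, 0.5], [0.5, 0.5]]
    for _ in range(num_steps):
        # Each player samples an action from its own mixed strategy.
        a = [0 if rng.random() < p[i][0] else 1 for i in range(2)]
        # Matching pennies: player 0 is rewarded on a match, player 1 on a
        # mismatch; rewards are binary, as LR-I requires.
        r0 = 1.0 if a[0] == a[1] else 0.0
        r = [r0, 1.0 - r0]
        for i in range(2):
            for k in range(2):
                e = 1.0 if k == a[i] else 0.0
                # LR-I term (reward-weighted move toward the chosen action)
                # plus a pull toward the lagging anchor.
                p[i][k] += eta * r[i] * (e - p[i][k]) + gamma * (p_bar[i][k] - p[i][k])
                # The anchor slowly tracks the current strategy.
                p_bar[i][k] += gamma * (p[i][k] - p_bar[i][k])
    return p, p_bar

strategies, anchors = play()
```

Each update is a convex-style adjustment, so the strategies remain valid probability distributions throughout; the anchor term is what prevents the reward-inaction dynamics from locking onto a pure strategy in a game whose equilibrium is fully mixed.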

Additional Metadata
Conference: 2011 American Control Conference (ACC 2011)
Lu, X. (Xiaosong), & Schwartz, H.M. (2011). Decentralized learning in two-player zero-sum games: A LR-I lagging anchor algorithm. In Proceedings of the American Control Conference (pp. 107–112).