In this paper, we consider a multi-pursuer single-superior-evader pursuit-evasion differential game in which the evader's speed is comparable to that of each pursuer. We propose a new fuzzy reinforcement learning algorithm for this game. Each pursuer uses the proposed algorithm to learn its control strategy; the algorithm applies the residual gradient fuzzy actor critic learning (RGFACL) algorithm to tune the parameters of the pursuer's fuzzy logic controller (FLC). A formation control approach is incorporated into the tuning mechanism of the FLC so that the learning pursuer, or one of the other learning pursuers, can capture the superior evader. The formation control mechanism guarantees that the pursuers are distributed around the superior evader, which avoids collisions between pursuers, and that the capture regions of each two adjacent pursuers overlap or at least border each other, so that capture of the superior evader is guaranteed. The proposed algorithm is decentralized, as no communication among pursuers is required; the only information each learning pursuer needs is the position and the speed of the superior evader. The algorithm is applied to a multi-pursuer single-superior-evader pursuit-evasion differential game, and the simulation results show its effectiveness: the superior evader is always captured by one or more of the pursuers that learn their strategies with the proposed algorithm.
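The abstract's formation-control idea, distributing the pursuers around the superior evader so that adjacent capture regions overlap, can be illustrated with a minimal geometric sketch. The function name, the circular formation, and the uniform angular spacing below are illustrative assumptions for exposition; they are not the paper's actual control law or tuning mechanism.

```python
import math

def formation_targets(evader_pos, n_pursuers, radius):
    """Illustrative sketch: place desired pursuer positions evenly on a
    circle of the given radius around the evader, so that the angular
    separation between adjacent pursuers is 2*pi/n_pursuers.
    (Hypothetical helper; not the RGFACL-based algorithm itself.)"""
    ex, ey = evader_pos
    targets = []
    for i in range(n_pursuers):
        angle = 2.0 * math.pi * i / n_pursuers  # equal angular spacing
        targets.append((ex + radius * math.cos(angle),
                        ey + radius * math.sin(angle)))
    return targets
```

With equal spacing, choosing the radius small enough relative to each pursuer's capture radius makes adjacent capture regions overlap or border each other, which is the geometric condition the abstract relies on for guaranteed capture.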
10th Annual International Systems Conference, SysCon 2016
Department of Systems and Computer Engineering

Awheda, M.D. (Mostafa D.), & Schwartz, H.M. (2016). Decentralized learning in pursuit-evasion differential games with multi-pursuer and single-superior evader. In 10th Annual International Systems Conference, SysCon 2016 - Proceedings. doi:10.1109/SYSCON.2016.7490516