In this work, we propose a new fuzzy reinforcement learning algorithm for differential games with continuous state and action spaces. Unlike existing algorithms, which use direct algorithms to update the parameters of their function approximation systems, the proposed algorithm uses the residual gradient value iteration algorithm to tune both the input and output parameters of its function approximation systems. It has been shown in the literature that direct algorithms may fail to converge in some cases, whereas residual gradient algorithms are always guaranteed to converge to a local minimum. The proposed algorithm is called the residual gradient fuzzy actor–critic learning (RGFACL) algorithm. It is used to learn three different pursuit–evasion differential games. Simulation results show that the proposed RGFACL algorithm outperforms the fuzzy actor–critic learning and the Q-learning fuzzy inference system algorithms in terms of convergence and speed of learning.
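To make the distinction between the two update families concrete, the following is a minimal sketch (not the paper's actual algorithm, which operates on fuzzy inference systems) contrasting a direct TD update with a residual gradient update for a simple linear value function V(s) = θᵀφ(s). All names, features, and parameter values here are illustrative assumptions; the key point is that the residual gradient update descends the true gradient of the squared Bellman residual, which is the property behind its local-minimum convergence guarantee.

```python
import numpy as np

def direct_update(theta, phi_s, phi_s_next, r, gamma, alpha):
    """Direct (semi-gradient) update: the target r + gamma*V(s') is
    treated as a constant, so only grad V(s) = phi(s) appears."""
    delta = r + gamma * phi_s_next @ theta - phi_s @ theta  # Bellman residual
    return theta + alpha * delta * phi_s

def residual_gradient_update(theta, phi_s, phi_s_next, r, gamma, alpha):
    """Residual gradient update: true gradient descent on the squared
    Bellman residual delta**2 / 2, so grad V(s') also contributes."""
    delta = r + gamma * phi_s_next @ theta - phi_s @ theta
    return theta - alpha * delta * (gamma * phi_s_next - phi_s)

# Tiny demo on a single transition (all values are made up for illustration)
theta = np.zeros(2)
phi_s, phi_s_next = np.array([1.0, 0.0]), np.array([0.0, 1.0])
r, gamma, alpha = 1.0, 0.9, 0.1

theta_direct = direct_update(theta, phi_s, phi_s_next, r, gamma, alpha)
theta_rg = residual_gradient_update(theta, phi_s, phi_s_next, r, gamma, alpha)
print(theta_direct)  # only the phi(s) component of theta moves
print(theta_rg)      # both components move, along -grad of the residual
```

The same contrast carries over when φ and θ are replaced by the input and output parameters of a fuzzy function approximator, as in the proposed RGFACL algorithm.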

Additional Metadata
Keywords Fuzzy control, Pursuit–evasion differential games, Reinforcement learning, Residual gradient algorithms
Persistent URL dx.doi.org/10.1007/s40815-016-0284-8
Journal International Journal of Fuzzy Systems
Citation
Awheda, M.D. (Mostafa D.), & Schwartz, H.M. (2017). A Residual Gradient Fuzzy Reinforcement Learning Algorithm for Differential Games. International Journal of Fuzzy Systems, 19(4), 1058–1076. doi:10.1007/s40815-016-0284-8