This paper proposes a new fuzzy reinforcement learning algorithm that tunes the input and output parameters of a fuzzy logic controller. The algorithm uses three fuzzy inference systems (FISs): one serves as the actor (the fuzzy logic controller, FLC), and the other two serve as critics. It applies the residual gradient value iteration algorithm described in [4] to tune the input and output parameters of both the actor (the FLC of the learning robot) and the two critics, and is therefore called the residual gradient fuzzy actor critic learning (RGFACL) algorithm. The proposed algorithm is used to learn a pursuit-evasion differential game. Simulation results show that the RGFACL algorithm outperforms the fuzzy actor critic learning (FACL) algorithm proposed in [3] and the Q-learning fuzzy inference system (QLFIS) algorithm proposed in [7] in terms of convergence and speed of learning.
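The paper's update rule tunes FIS parameters, but the residual-gradient idea it builds on can be illustrated more simply. The sketch below is a minimal, hypothetical example (not the paper's algorithm or its pursuit-evasion setup): a residual-gradient value-iteration update, in which the gradient of the squared Bellman error flows through both the current and the successor state's value estimate, applied to a linear value function on a toy three-state chain. The chain MDP, one-hot features, discount factor, and learning rate are all assumptions made for illustration.

```python
import numpy as np

def features(s, n=3):
    """One-hot features for a small discrete state space (illustrative)."""
    phi = np.zeros(n)
    phi[s] = 1.0
    return phi

def residual_gradient_update(w, s, r, s_next, alpha=0.5, gamma=0.9):
    """One residual-gradient step on the squared Bellman error.

    Unlike plain TD(0), the gradient is taken through V(s_next) as well,
    so the update direction is (gamma * phi_next - phi) rather than -phi.
    """
    phi, phi_next = features(s), features(s_next)
    delta = r + gamma * np.dot(w, phi_next) - np.dot(w, phi)  # Bellman residual
    # Descend 0.5 * delta^2: gradient is delta * (gamma * phi_next - phi)
    return w - alpha * delta * (gamma * phi_next - phi)

# Toy chain: 0 -> 1 -> 2 (absorbing), with reward 1 on the 1 -> 2 transition
transitions = [(0, 0.0, 1), (1, 1.0, 2), (2, 0.0, 2)]

w = np.zeros(3)
for _ in range(10000):
    for s, r, s_next in transitions:
        w = residual_gradient_update(w, s, r, s_next)
# w converges to approximately [0.9, 1.0, 0.0]: V(1) = 1, V(0) = 0.9 * V(1)
```

Because every Bellman residual can be driven to zero simultaneously here, the residual-gradient iteration converges to the exact value function; its advantage over plain TD updates is guaranteed convergence under function approximation, at the cost of slower learning in some problems.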

Additional Metadata
Persistent URL dx.doi.org/10.1109/CCECE.2015.7129412
Conference 2015 28th IEEE Canadian Conference on Electrical and Computer Engineering, CCECE 2015
Citation
Awheda, M.D. (Mostafa D.), & Schwartz, H.M. (2015). The residual gradient FACL algorithm for differential games. In Canadian Conference on Electrical and Computer Engineering (pp. 1006–1011). doi:10.1109/CCECE.2015.7129412