This paper addresses the problem of tuning the input and output parameters of a fuzzy logic controller. The system learns autonomously, without supervision or a priori training data. Two novel techniques are proposed. The first technique combines Q(λ)-learning with function approximation (a fuzzy inference system) to tune the parameters of a fuzzy logic controller operating in continuous state and action spaces. The second technique combines Q(λ)-learning with genetic algorithms to tune the parameters of a fuzzy logic controller in discrete state and action spaces. The proposed techniques are applied to different pursuit-evasion differential games and compared with a classical control strategy, Q(λ)-learning alone, reward-based genetic algorithm learning, and the technique proposed by Dai et al. (2005) [19], in which a neural network is used as a function approximator for Q-learning. Computer simulations show the usefulness of the proposed techniques.
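For readers unfamiliar with the Q(λ) component underlying both techniques, the following is a minimal sketch of Watkins' Q(λ)-learning with eligibility traces on a discrete state-action space. The environment interface (`env.reset`, `env.step`) and all hyperparameter values are illustrative assumptions; the paper's fuzzy function-approximation and genetic-algorithm tuning layers are not reproduced here.

```python
# Illustrative sketch only: tabular Watkins' Q(lambda) with accumulating
# eligibility traces. The env interface and hyperparameters are hypothetical.
import numpy as np

def q_lambda(env, n_states, n_actions, episodes=500,
             alpha=0.1, gamma=0.95, lam=0.9, epsilon=0.1, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        E = np.zeros_like(Q)                     # eligibility traces
        s = env.reset()
        a = int(rng.integers(n_actions)) if rng.random() < epsilon else int(Q[s].argmax())
        done = False
        while not done:
            s_next, r, done = env.step(a)
            # epsilon-greedy behaviour action for the next step
            a_next = int(rng.integers(n_actions)) if rng.random() < epsilon else int(Q[s_next].argmax())
            a_star = int(Q[s_next].argmax())     # greedy action (Watkins' variant)
            delta = r + gamma * Q[s_next, a_star] * (not done) - Q[s, a]
            E[s, a] += 1.0                       # accumulating trace
            Q += alpha * delta * E
            # decay traces, cutting them when the behaviour action is exploratory
            E *= gamma * lam if a_next == a_star else 0.0
            s, a = s_next, a_next
    return Q
```

In the paper's first technique this tabular value store is replaced by a fuzzy inference system so that continuous states and actions can be handled; in the second, a genetic algorithm searches over the controller parameters while Q(λ) supplies the learning signal.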

Additional Metadata
Keywords Differential game, Function approximation, Fuzzy control, Genetic algorithms, Q(λ)-learning, Reinforcement learning
Persistent URL dx.doi.org/10.1016/j.robot.2010.09.006
Journal Robotics and Autonomous Systems
Citation
Desouky, S.F. (Sameh F.), & Schwartz, H.M. (2011). Self-learning fuzzy logic controllers for pursuit-evasion differential games. Robotics and Autonomous Systems, 59(1), 22–33. doi:10.1016/j.robot.2010.09.006