The main contribution of this work is a novel reinforcement learning algorithm for problems in which the agent's reinforcement signal is subject to a Poissonian stochastic time delay. Despite this delay noise, the algorithm can construct a suitable control policy for the agent's environment. The approach handles reinforcements that may be received out of order in time, or may even overlap, a case not previously considered in the literature. The proposed algorithm is evaluated in simulation and its performance is compared to that of a standard Q-learning algorithm; the simulations show that the proposed method improves the performance of a learning agent in an environment with Poissonian-type stochastically delayed rewards.
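
The paper's own method is a multiple-model extension of Q-learning, which is not reproduced here. As a point of reference only, the sketch below illustrates the baseline setting the abstract describes: a plain tabular Q-learning update in which each reward is credited after a Poisson-distributed delay, so reinforcements can arrive out of order or several at once. All function names, variable names, and parameter values are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import defaultdict

import numpy as np

# Illustrative sketch (hypothetical names/values): tabular Q-learning where each
# reward is delivered after a Poisson-distributed delay, so reinforcements can
# arrive out of order or overlap at the same step.

ALPHA, GAMMA, EPSILON, LAM = 0.1, 0.95, 0.1, 3.0  # learning rate, discount, exploration, mean delay

Q = defaultdict(float)        # Q[(state, action)] -> value estimate
pending = defaultdict(list)   # arrival_step -> list of (state, action, reward, next_state)

def choose_action(state, actions):
    """Epsilon-greedy action selection over the tabular Q estimates."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def step_update(t, state, action, reward, next_state, actions):
    """Queue this transition's reward with a Poisson delay, then apply the
    standard Q-learning update for every transition whose delayed
    reinforcement arrives at step t (possibly none, possibly several)."""
    delay = np.random.poisson(LAM)
    pending[t + delay].append((state, action, reward, next_state))
    for (s, a, r, s_next) in pending.pop(t, []):
        target = r + GAMMA * max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
```

Because several delayed rewards can fall due at the same step while others remain in flight, the standard update above is applied to transitions whose state–action pair may no longer match the agent's current situation; this is the mismatch that motivates the delay-aware approach studied in the paper.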

Additional Metadata
Keywords Cost, Jitter, Markov Decision Process, Multiple models, Reinforcement learning, Reward, Stochastic time delay
Persistent URL dx.doi.org/10.1109/smc.2014.6974146
Conference 2014 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2014
Citation
Campbell, J.S. (Jeffrey S.), Givigi, S.N. (Sidney N.), & Schwartz, H.M. (2014). Multiple-model Q-learning for stochastic reinforcement delays. In Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics (pp. 1611–1617). doi:10.1109/smc.2014.6974146