Learning Automata (LA) can be reckoned to be among the founding algorithms on which the field of Reinforcement Learning has been built. Among the families of LA, Estimator Algorithms (EAs) are certainly the fastest, and of these, the family of discretized algorithms has been proven to converge even faster than their continuous counterparts. However, it has recently been reported that the proofs of ε-optimality reported for all of these algorithms over the past three decades are flawed. We applaud the researchers who discovered this flaw, and who further proceeded to rectify the proof for the Continuous Pursuit Algorithm (CPA). The latter proof examines the monotonicity property of the probability of selecting the optimal action, and requires the learning parameter to be continuously changing. In this paper, we provide a new method to prove the ε-optimality of the Discretized Pursuit Algorithm (DPA) which does not require this constraint, by virtue of the fact that the DPA has, in and of itself, absorbing barriers to which the LA can jump in a discretized manner. Unlike the proof given (Zhang et al., Appl Intell 41:974–985) for an absorbing version of the CPA, which utilizes the single-action Hoeffding's inequality, the current proof invokes what we shall refer to as the "multi-action" version of Hoeffding's inequality. We believe that our proof is both unique and pioneering. It can also form the basis for formally showing the ε-optimality of the other EAs that possess absorbing states.
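To make the scheme discussed above concrete, the following is a minimal sketch of a discretized pursuit automaton of the reward-inaction type, written under our own assumptions (function name, resolution parameter, and environment model are illustrative, not taken from the paper). On each rewarded interaction, the probability vector is "pursued" toward the action with the highest maximum-likelihood reward estimate, moving in fixed steps of size 1/(rN); because the steps are discrete, the vector can reach (and be absorbed at) a unit vector exactly:

```python
import random

def discretized_pursuit(reward_probs, n_steps=100_000, resolution=1_000, seed=0):
    """Sketch of a discretized pursuit (reward-inaction) learning automaton.

    reward_probs: true reward probability of each action (a stationary
                  random environment, assumed here for illustration).
    resolution:   N; action probabilities move in steps of 1/(r*N).
    Returns the final action-probability vector.
    """
    rng = random.Random(seed)
    r = len(reward_probs)
    delta = 1.0 / (r * resolution)      # smallest discretized probability step
    p = [1.0 / r] * r                   # action-probability vector, starts uniform
    rewards = [0] * r                   # reward counts per action
    pulls = [0] * r                     # selection counts per action
    for _ in range(n_steps):
        a = rng.choices(range(r), weights=p)[0]
        pulls[a] += 1
        if rng.random() < reward_probs[a]:          # environment rewarded the action
            rewards[a] += 1
            # pursue the action with the highest reward estimate
            est = [rewards[i] / pulls[i] if pulls[i] else 0.0 for i in range(r)]
            best = max(range(r), key=lambda i: est[i])
            for i in range(r):
                if i != best:
                    p[i] = max(0.0, p[i] - delta)   # step down, bounded at 0
            p[best] = 1.0 - sum(p[i] for i in range(r) if i != best)
        if max(p) >= 1.0:
            break                                    # absorbed at a unit vector
    return p
```

Note how the update on a penalty does nothing (reward-inaction), and how absorption at a unit vector, rather than mere asymptotic approach, is exactly the property the paper's proof exploits.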

Additional Metadata
Keywords: Convergence, DPA, Learning automata, Machine learning, Pursuit algorithms, ε-optimality
Persistent URL: dx.doi.org/10.1007/s10489-015-0670-1
Journal: Applied Intelligence
Zhang, X. (Xuan), Oommen, J., Granmo, O.-C. (Ole-Christoffer), & Jiao, L. (Lei). (2016). A formal proof of the ε-optimality of discretized pursuit algorithms. Applied Intelligence, 44(2), 282–294. doi:10.1007/s10489-015-0670-1