∊-Optimal Discretized Linear Reward-Penalty Learning Automata
In this paper we consider variable structure stochastic automata (VSSA), which interact with an environment and dynamically learn the optimal action that the environment offers. Like all VSSA, these automata are fully defined by a set of action probability updating rules [19]. However, to minimize the requirements on the random number generator used to implement the VSSA, and to increase the speed of convergence of the automaton, we consider the case in which the probability updating functions can assume only a finite number of values. These values discretize the probability space [0,1], and hence the automata are called discretized learning automata. The discretized automata are linear because the subintervals of [0,1] are of equal length. We prove the following results: a) two-action discretized linear reward-penalty automata are ergodic and ∊-optimal in all environments whose minimum penalty probability is less than 0.5; b) there exist discretized two-action linear reward-penalty automata that are ergodic and ∊-optimal in all random environments; and c) discretized two-action linear reward-penalty automata with artificially created absorbing barriers are ∊-optimal in all random environments. In addition to these theoretical results, simulation results are presented that indicate the properties of the automata discussed. The rate of convergence of all these automata and some open problems are also presented.
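To make the discretization concrete, the following sketch simulates a two-action automaton whose action probability is confined to the equal-length grid {1/N, 2/N, ..., (N-1)/N} and moves one grid step per interaction. This is an illustrative reading of a discretized linear reward-penalty scheme under assumed update rules (step toward the chosen action on reward, away on penalty, clamped to the interior of [0,1]); it is not the authors' exact formulation, and the function name and parameters are invented for the example.

```python
import random

def discretized_lrp(penalty_probs, resolution=100, steps=20000, seed=0):
    """Simulate a two-action discretized linear reward-penalty automaton.

    penalty_probs: (c0, c1), the environment's penalty probabilities.
    resolution:    N, the number of equal subintervals of [0,1].
    Returns the final probability of selecting action 0.
    """
    rng = random.Random(seed)
    delta = 1.0 / resolution          # grid step: subintervals of equal length
    p1 = 0.5                          # unbiased start (assumes N is even)
    for _ in range(steps):
        action = 0 if rng.random() < p1 else 1
        penalized = rng.random() < penalty_probs[action]
        # Assumed rule: move one grid step toward the chosen action on
        # reward, one step away from it on penalty.
        if action == 0:
            p1 += -delta if penalized else delta
        else:
            p1 += delta if penalized else -delta
        # Clamp to the interior grid points, keeping the chain ergodic
        # (no absorbing barriers in this variant).
        p1 = min(1.0 - delta, max(delta, p1))
    return p1

# In an environment where action 0 has the lower penalty probability,
# the automaton should come to select action 0 nearly all the time.
p = discretized_lrp([0.1, 0.6])
```

Because the walk has positive drift toward the better action whenever its penalty probability is below 0.5, the probability settles near the upper grid boundary rather than being absorbed at it; the paper's third construction adds artificial absorbing barriers to obtain ∊-optimality in all environments.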
Journal: IEEE Transactions on Systems, Man and Cybernetics
Oommen, J., & Christensen, J. P. R. (1988). ∊-Optimal Discretized Linear Reward-Penalty Learning Automata. IEEE Transactions on Systems, Man and Cybernetics, 18(3), 451–458. doi:10.1109/21.7494