## Abstract

We discuss synchronous and asynchronous iterations of the form x^{k+1} = x^{k} + γ(k)(h(x^{k}) + w^{k}), where h is a suitable map and {w^{k}} is a deterministic or stochastic sequence satisfying suitable conditions. In particular, in the stochastic case, these are stochastic approximation iterations that can be analyzed using the ODE approach, based either on Kushner and Clark's lemma for the synchronous case or on Borkar's theorem for the asynchronous case. However, the analysis requires that the iterates {x^{k}} be bounded, a fact that is usually hard to prove. We develop a novel framework for proving boundedness in the deterministic case, which is also applicable to the stochastic case when the deterministic hypotheses can be verified in the almost sure sense. This framework is based on scaling ideas and on the properties of Lyapunov functions. We then combine the boundedness property with Borkar's stability analysis of ODEs involving nonexpansive mappings to prove convergence (with probability 1 in the stochastic case). We also apply our convergence analysis to Q-learning algorithms for stochastic shortest path problems and are able to relax some of the assumptions of the currently available results.
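The iteration from the abstract can be illustrated numerically. The sketch below is a hypothetical instance, not the paper's setting: it takes h(x) = T(x) - x with T a simple contraction (a stand-in for the nonexpansive maps the paper treats), zero-mean Gaussian noise for w^{k}, and a diminishing stepsize γ(k) = 1/(k+1). Under these assumptions the iterates track the ODE ẋ = h(x) and converge to the fixed point of T.

```python
import numpy as np

# Hypothetical sketch of the stochastic approximation iteration
#   x^{k+1} = x^k + gamma(k) * (h(x^k) + w^k)
# with h(x) = T(x) - x for a contraction T (fixed point x* = [1, 1])
# and zero-mean noise w^k. All specific choices here are illustrative.

rng = np.random.default_rng(0)

def T(x):
    # Contraction with modulus 0.5 and fixed point [1, 1]
    return 0.5 * (x - 1.0) + 1.0

x = np.zeros(2)
for k in range(20000):
    gamma = 1.0 / (k + 1)               # stepsizes: sum diverges, sum of squares converges
    w = rng.normal(scale=0.1, size=2)   # zero-mean stochastic noise term
    x = x + gamma * (T(x) - x + w)

print(x)  # iterates settle near the fixed point [1, 1]
```

Note that these iterates happen to stay bounded because T is a contraction; for merely nonexpansive maps, boundedness is exactly the property the paper's scaling and Lyapunov-function arguments are needed to establish.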

| Field | Value |
|---|---|
| Original language | English (US) |
| Pages (from-to) | 1-22 |
| Number of pages | 22 |
| Journal | SIAM Journal on Control and Optimization |
| Volume | 41 |
| Issue number | 1 |
| DOIs | |
| State | Published - 2003 |
| Externally published | Yes |

## Keywords

- Neuro-dynamic programming
- Q-learning
- Stochastic approximation

## ASJC Scopus subject areas

- Control and Optimization
- Applied Mathematics