Learning algorithms for Markov decision processes with average cost

Jinane Abounadi, Dimitri Bertsekas, V. S. Borkar

Research output: Contribution to journal › Article › peer-review

123 Scopus citations


This paper gives the first rigorous convergence analysis of analogues of Watkins's Q-learning algorithm, applied to average cost control of finite-state Markov chains. We discuss two algorithms which may be viewed as stochastic approximation counterparts of two existing algorithms for recursively computing the value function of the average cost problem - the traditional relative value iteration (RVI) algorithm and a recent algorithm of Bertsekas based on the stochastic shortest path (SSP) formulation of the problem. Both synchronous and asynchronous implementations are considered and analyzed using the ODE method. This involves establishing asymptotic stability of associated ODE limits. The SSP algorithm also uses ideas from two-time-scale stochastic approximation.
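As an illustration of the RVI-style algorithm discussed above, here is a minimal sketch of relative value iteration Q-learning on a toy MDP. This is not the paper's exact formulation; the toy transition/cost data, the reference pair defining the offset f(Q), and the step-size schedule are all assumptions made for the example. The key feature is that each update subtracts an offset f(Q) (here, Q at a fixed reference state-action pair), playing the role of the normalization in relative value iteration.

```python
import random

random.seed(0)

# Toy 2-state, 2-action MDP (illustrative data, not from the paper):
# P[s][a] = list of (next_state, probability); c[s][a] = one-stage cost.
P = {0: {0: [(0, 0.9), (1, 0.1)], 1: [(0, 0.2), (1, 0.8)]},
     1: {0: [(0, 0.7), (1, 0.3)], 1: [(0, 0.1), (1, 0.9)]}}
c = {0: {0: 1.0, 1: 0.5}, 1: {0: 2.0, 1: 1.5}}

def step(s, a):
    """Sample a transition from state s under action a."""
    r, acc = random.random(), 0.0
    for s2, p in P[s][a]:
        acc += p
        if r <= acc:
            return s2, c[s][a]
    return P[s][a][-1][0], c[s][a]

Q = {(s, a): 0.0 for s in P for a in (0, 1)}
ref = (0, 0)  # reference pair: the offset is f(Q) = Q[ref]

s = 0
for n in range(1, 50001):
    a = random.choice((0, 1))        # exploratory (uniform) action choice
    s2, cost = step(s, a)
    alpha = 1.0 / (1 + n / 100)      # tapering step size
    # RVI Q-learning update: subtract the offset f(Q) from the target.
    target = cost + min(Q[(s2, b)] for b in (0, 1)) - Q[ref]
    Q[(s, a)] += alpha * (target - Q[(s, a)])
    s = s2

# At the fixed point, f(Q) = Q[ref] estimates the optimal average cost.
print(round(Q[ref], 2))
```

This is the asynchronous, trajectory-based variant: only the visited state-action pair is updated at each step, which is the setting analyzed with the ODE method in the paper.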

Original language: English (US)
Pages (from-to): 681-698
Number of pages: 18
Journal: SIAM Journal on Control and Optimization
Issue number: 3
State: Published - 2002
Externally published: Yes


Keywords

  • Average cost control
  • Controlled Markov chains
  • Dynamic programming
  • Q-learning
  • Simulation-based algorithms
  • Stochastic approximation

ASJC Scopus subject areas

  • Control and Optimization
  • Applied Mathematics


