Distributed asynchronous incremental subgradient methods

A. Nedić, D. P. Bertsekas, V. S. Borkar

Research output: Contribution to journal › Article › peer-review


We propose and analyze a distributed asynchronous subgradient method for minimizing a convex function that is the sum of a large number of component functions. This type of minimization arises in a dual context from Lagrangian relaxation of the coupling constraints of large-scale separable problems. The idea is to distribute the computation of the component subgradients among a set of processors, which communicate only with a coordinator. The coordinator performs the subgradient iteration incrementally and asynchronously, taking steps along the subgradients of the component functions that are available at the update time. The incremental approach has performed very well in centralized computation, and the parallel implementation should improve its performance substantially, particularly for typical problems where computation of the component subgradients is relatively costly.
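To illustrate the incremental idea behind the abstract, here is a minimal centralized sketch, not the paper's distributed coordinator/processor scheme: components are processed one at a time, in an arbitrary arrival order that stands in for asynchronous updates. The problem instance (minimizing a sum of absolute values, whose minimizer is the median), and all names such as `a_values`, `step`, and `n_passes`, are our own assumptions for illustration.

```python
import random

def subgrad(x, a):
    """A subgradient of the component function |x - a| at x."""
    return 1.0 if x > a else (-1.0 if x < a else 0.0)

def incremental_subgradient(a_values, x0=0.0, step=0.01, n_passes=2000, seed=0):
    """Minimize f(x) = sum_i |x - a_i| by incremental subgradient steps.

    Each pass processes the components one at a time in a shuffled order,
    a stand-in for subgradients arriving asynchronously at a coordinator.
    """
    rng = random.Random(seed)
    x = x0
    for _ in range(n_passes):
        order = list(range(len(a_values)))
        rng.shuffle(order)  # arbitrary "arrival" order of component subgradients
        for i in order:
            x -= step * subgrad(x, a_values[i])  # step along one component only
    return x

# The minimizer of sum_i |x - a_i| is the median of the a_i, here 2.0;
# with a small constant step the iterate oscillates in a neighborhood of it.
x_star = incremental_subgradient([1.0, 2.0, 10.0])
```

With a constant stepsize the iterates converge only to within a neighborhood of the optimum (of size proportional to the stepsize); diminishing stepsizes yield exact convergence, at the cost of slower progress.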

Original language: English (US)
Pages (from-to): 381-407
Number of pages: 27
Journal: Studies in Computational Mathematics
Issue number: C
State: Published - 2001
Externally published: Yes

ASJC Scopus subject areas

  • Computational Mathematics
