In this paper we propose a distributed dual gradient algorithm for minimizing linearly constrained separable convex problems and analyze its rate of convergence. In particular, we show that under the assumption that the Hessian of the primal objective function is bounded, a global error bound type property holds for the dual problem. Using this error bound property we devise a fully distributed dual gradient scheme for which we derive a global linear rate of convergence. The proposed dual gradient method is fully distributed, requiring only local information, since it is based on a weighted stepsize. Our method can be applied in many applications, e.g. distributed model predictive control, network utility maximization or optimal power flow.
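To make the dual gradient idea concrete, the following is a minimal sketch (not the paper's exact weighted scheme) of dual ascent for a toy separable quadratic problem with a single coupling constraint: each agent minimizes its local Lagrangian term independently, and the multiplier is updated with the constraint residual. The problem data, the constant stepsize, and the iteration count are illustrative assumptions.

```python
# Illustrative dual gradient method for the separable problem
#   minimize  sum_i 0.5 * (x_i - c_i)^2   subject to  sum_i x_i = b.
# Each x_i update uses only local data (c_i) and the shared multiplier;
# the multiplier is updated by dual ascent on the constraint residual.

def dual_gradient(c, b, step, iters=200):
    lam = 0.0
    for _ in range(iters):
        # Local primal minimizers: argmin_x 0.5*(x - c_i)^2 + lam * x
        x = [ci - lam for ci in c]
        # Dual ascent step on the residual sum(x) - b
        lam += step * (sum(x) - b)
    return x, lam

# For this quadratic, the optimum is x_i = c_i - lam*, lam* = (sum(c) - b) / n;
# any constant step below 2/n converges linearly (here n = 3, step = 0.2).
x, lam = dual_gradient([1.0, 2.0, 3.0], b=3.0, step=0.2)
print(x, lam)
```

The paper's weighted stepsize replaces the global constant `step` with locally computable weights, which is what makes the scheme fully distributed.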
Host publication: 2015 European Control Conference, ECC 2015
Publisher: Institute of Electrical and Electronics Engineers Inc.
Published: Nov 16, 2015
Conference: European Control Conference, ECC 2015, Linz, Austria, Jul 15–17, 2015
ASJC Scopus subject areas: Control and Systems Engineering