Fast Convergence Rates for Distributed Non-Bayesian Learning

Angelia Nedich, Alex Olshevsky, Cesar A. Uribe

Research output: Contribution to journal › Article › peer-review

123 Scopus citations


We consider the problem of distributed learning, where a network of agents collectively aim to agree on a hypothesis that best explains a set of distributed observations of conditionally independent random processes. We propose a distributed algorithm and establish consistency, as well as a nonasymptotic, explicit, and geometric convergence rate for the concentration of the beliefs around the set of optimal hypotheses. Additionally, if the agents interact over static networks, we provide an improved learning protocol with better scalability with respect to the number of nodes in the network.
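The abstract does not spell out the update rule, but work in this area typically uses a log-linear scheme in which each agent geometrically averages its neighbors' beliefs (weighted by a stochastic mixing matrix) and then performs a local Bayesian update with its own observation. The sketch below illustrates that generic scheme on a hypothetical two-hypothesis, three-agent example; the mixing matrix, likelihood model, and all names are illustrative assumptions, not taken from the paper.

```python
import math
import random

# Hypothetical observation model: two hypotheses about a coin's bias.
# The true hypothesis is theta* = 1.
LIKELIHOODS = {0: 0.3, 1: 0.7}  # P(observation = 1 | theta)

def observe(rng):
    # Data is generated under the true hypothesis theta* = 1.
    return 1 if rng.random() < LIKELIHOODS[1] else 0

def likelihood(obs, theta):
    p = LIKELIHOODS[theta]
    return p if obs == 1 else 1.0 - p

def step(beliefs, A, observations):
    """One round of a log-linear non-Bayesian update (generic form):
    mu_{i,t+1}(theta) ∝ L_i(s_i | theta) * prod_j mu_{j,t}(theta)^{A_ij}."""
    n, k = len(beliefs), len(beliefs[0])
    new = []
    for i in range(n):
        logs = []
        for theta in range(k):
            # Geometric average of neighbors' beliefs, in log space.
            g = sum(A[i][j] * math.log(beliefs[j][theta]) for j in range(n))
            logs.append(g + math.log(likelihood(observations[i], theta)))
        # Normalize stably via the log-sum-exp trick.
        m = max(logs)
        w = [math.exp(x - m) for x in logs]
        s = sum(w)
        new.append([x / s for x in w])
    return new

rng = random.Random(0)
# Doubly stochastic mixing matrix for a 3-agent path graph (illustrative).
A = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]
beliefs = [[0.5, 0.5] for _ in range(3)]
for _ in range(200):
    beliefs = step(beliefs, A, [observe(rng) for _ in range(3)])
# Every agent's belief concentrates on the true hypothesis theta* = 1.
```

The geometric rate of concentration shown in the paper corresponds to the linear growth of the log-belief ratio per round visible in a run like this.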

Original language: English (US)
Article number: 7891016
Pages (from-to): 5538-5553
Number of pages: 16
Journal: IEEE Transactions on Automatic Control
Issue number: 11
State: Published - Nov 2017


Keywords

  • Algorithm design and analysis
  • Bayes methods
  • distributed algorithms
  • estimation
  • learning

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Computer Science Applications
  • Electrical and Electronic Engineering

