Decentralized Control of Multi-Agent Systems using Local Density Feedback

Shiba Biswal, Karthik Elamvazhuthi, Spring Berman

Research output: Contribution to journal › Article › peer-review



In this paper, we stabilize a discrete-time Markov process evolving on a compact subset of <formula><tex>$\mathbb{R}^d$</tex></formula> to an arbitrary target distribution that has an <formula><tex>$L^\infty$</tex></formula> density and does not necessarily have a connected support on the state space. We address this problem by stabilizing the corresponding Kolmogorov forward equation, the "mean-field model" of the system, using a density-dependent transition kernel as the control parameter. Our main application of interest is controlling the distribution of a multi-agent system in which each agent evolves according to this discrete-time Markov process. To prevent agent state transitions at the equilibrium distribution, which would potentially waste energy, we show that the Markov process can be constructed in such a way that the operator that pushes forward measures is the identity at the target distribution. In order to achieve this, the transition kernel is defined as a function of the current agent distribution, resulting in a nonlinear Markov process. Moreover, we design the transition kernel to be "decentralized" in the sense that it depends only on the local density measured by each agent.
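The construction described above can be illustrated with a minimal particle simulation. The sketch below is an assumption-laden toy model, not the paper's actual kernel: agents live on a discrete ring of states, and each agent leaves its current state with a probability proportional to the local excess density over a hypothetical target distribution `target`. Because that leave probability vanishes exactly when the empirical density matches the target, the push-forward operator is the identity at equilibrium, mirroring the no-wasted-transitions property, and the rule is decentralized in that each agent only uses the density at its own state.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states = 10          # discrete ring of states {0, ..., 9} (illustrative state space)
n_agents = 5000
target = np.ones(n_states) / n_states  # hypothetical target density (uniform here)
gain = 0.5             # feedback gain; assumed small enough for stability

# start with all agents concentrated on the left half of the ring
states = rng.integers(0, n_states // 2, n_agents)

def step(states):
    # empirical density; in the mean-field limit this plays the role of mu_t
    density = np.bincount(states, minlength=n_states) / n_agents
    # local excess over the target: zero wherever density <= target, so at
    # the target distribution no agent moves (identity push-forward)
    excess = np.where(density > 0,
                      np.maximum(0.0, 1.0 - target / np.maximum(density, 1e-12)),
                      0.0)
    leave_prob = gain * excess[states]        # each agent reads only its local density
    move = rng.random(n_agents) < leave_prob
    # movers jump to a uniformly random neighbor on the ring
    offsets = rng.choice([-1, 1], size=n_agents)
    return np.where(move, (states + offsets) % n_states, states)

for _ in range(500):
    states = step(states)
```

Since agents only ever leave overdense states and total mass is conserved, the empirical density can only settle where it matches the target (up to sampling noise), which is the discrete analogue of stabilizing the Kolmogorov forward equation to the target density.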

Original language: English (US)
Journal: IEEE Transactions on Automatic Control
State: Accepted/In press - 2021


Keywords

  • Density measurement
  • Kernel
  • Markov processes
  • Mathematical model
  • Power system dynamics
  • Sociology
  • Statistics

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Computer Science Applications
  • Electrical and Electronic Engineering


