TY - CONF
T1 - Mean-Field Stabilization of Robotic Swarms to Probability Distributions with Disconnected Supports
AU - Elamvazhuthi, Karthik
AU - Biswal, Shiba
AU - Berman, Spring
N1 - Funding Information:
This work was supported by National Science Foundation (NSF) Award CMMI-1436960 and by ONR Young Investigator Award N00014-16-1-2605.
Publisher Copyright:
© 2018 AACC.
PY - 2018/8/9
Y1 - 2018/8/9
N2 - We consider the problem of stabilizing a swarm of agents to a target probability distribution among a set of states, given that the agents' states evolve according to an interacting system of continuous-time Markov chains (CTMCs). We construct a class of density-feedback laws, i.e., control laws that are functions of the swarm population density, that achieve this objective provided that the graph associated with the CTMCs is strongly connected. To execute these control laws, each agent only requires information on the population fraction of agents that are in its current state. Additionally, the control laws ensure that there are no state transitions by agents at equilibrium, which is a known drawback of stabilization using time- and density-independent control laws. We guarantee global asymptotic stability of the equilibrium distribution by analyzing the corresponding mean-field model. The fact that any probability distribution can be globally stabilized is a significant extension of previous mean-field-based approaches that control swarms of agents using time-invariant control laws, which require the equilibrium distribution to have a strongly connected support. To admit feedback laws that take values only on a discrete set, we consider control laws that can be discontinuous functions of the agent densities. We validate the control laws using stochastic simulations of the CTMC model and numerical simulations of the mean-field model.
AB - We consider the problem of stabilizing a swarm of agents to a target probability distribution among a set of states, given that the agents' states evolve according to an interacting system of continuous-time Markov chains (CTMCs). We construct a class of density-feedback laws, i.e., control laws that are functions of the swarm population density, that achieve this objective provided that the graph associated with the CTMCs is strongly connected. To execute these control laws, each agent only requires information on the population fraction of agents that are in its current state. Additionally, the control laws ensure that there are no state transitions by agents at equilibrium, which is a known drawback of stabilization using time- and density-independent control laws. We guarantee global asymptotic stability of the equilibrium distribution by analyzing the corresponding mean-field model. The fact that any probability distribution can be globally stabilized is a significant extension of previous mean-field-based approaches that control swarms of agents using time-invariant control laws, which require the equilibrium distribution to have a strongly connected support. To admit feedback laws that take values only on a discrete set, we consider control laws that can be discontinuous functions of the agent densities. We validate the control laws using stochastic simulations of the CTMC model and numerical simulations of the mean-field model.
UR - http://www.scopus.com/inward/record.url?scp=85052601393&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85052601393&partnerID=8YFLogxK
U2 - 10.23919/ACC.2018.8431780
DO - 10.23919/ACC.2018.8431780
M3 - Conference contribution
AN - SCOPUS:85052601393
SN - 9781538654286
T3 - Proceedings of the American Control Conference
SP - 885
EP - 892
BT - 2018 Annual American Control Conference, ACC 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 Annual American Control Conference, ACC 2018
Y2 - 27 June 2018 through 29 June 2018
ER -