TY - GEN
T1 - Learning the Structure of Large Networked Systems Obeying Conservation Laws
AU - Rayas, Anirudh
AU - Anguluri, Rajasekhar
AU - Dasarathy, Gautam
N1 - Funding Information:
This work was supported in part by the National Science Foundation (NSF) under the grants CCF-2048223 and OAC-1934766, and by the National Institutes of Health (NIH) under the grant 1R01GM140468-01.
Publisher Copyright:
© 2022 Neural information processing systems foundation. All rights reserved.
PY - 2022
Y1 - 2022
N2 - Many networked systems such as electric networks, the brain, and social networks of opinion dynamics are known to obey conservation laws. Examples of this phenomenon include the Kirchhoff laws in electric networks and opinion consensus in social networks. Conservation laws in networked systems are modeled as balance equations of the form X = B*Y, where the sparsity pattern of B* ∈ Rp×p captures the connectivity of the network on p nodes, and Y, X ∈ Rp are vectors of potentials and injected flows at the nodes respectively. The node potentials Y cause flows across edges which aim to balance out the potential difference, and the flows X injected at the nodes are extraneous to the network dynamics. In several practical systems, the network structure is often unknown and needs to be estimated from data to facilitate modeling, management, and control. To this end, one has access to samples of the node potentials Y, but only the statistics of the node injections X. Motivated by this important problem, we study the estimation of the sparsity structure of the matrix B* from n samples of Y under the assumption that the node injections X follow a Gaussian distribution with a known covariance ΣX. We propose a new ℓ1-regularized maximum likelihood estimator for tackling this problem in the high-dimensional regime where the size of the network may be vastly larger than the number of samples n. We show that this optimization problem is convex in the objective and admits a unique solution. Under a new mutual incoherence condition, we establish sufficient conditions on the triple (n, p, d) for which exact sparsity recovery of B* is possible with high probability, where d is the degree of the underlying graph. We also establish guarantees for the recovery of B* in the element-wise maximum, Frobenius, and operator norms. Finally, we complement our theoretical results with experimental validation of the performance of the proposed estimator on synthetic and real-world data.
AB - Many networked systems such as electric networks, the brain, and social networks of opinion dynamics are known to obey conservation laws. Examples of this phenomenon include the Kirchhoff laws in electric networks and opinion consensus in social networks. Conservation laws in networked systems are modeled as balance equations of the form X = B*Y, where the sparsity pattern of B* ∈ Rp×p captures the connectivity of the network on p nodes, and Y, X ∈ Rp are vectors of potentials and injected flows at the nodes respectively. The node potentials Y cause flows across edges which aim to balance out the potential difference, and the flows X injected at the nodes are extraneous to the network dynamics. In several practical systems, the network structure is often unknown and needs to be estimated from data to facilitate modeling, management, and control. To this end, one has access to samples of the node potentials Y, but only the statistics of the node injections X. Motivated by this important problem, we study the estimation of the sparsity structure of the matrix B* from n samples of Y under the assumption that the node injections X follow a Gaussian distribution with a known covariance ΣX. We propose a new ℓ1-regularized maximum likelihood estimator for tackling this problem in the high-dimensional regime where the size of the network may be vastly larger than the number of samples n. We show that this optimization problem is convex in the objective and admits a unique solution. Under a new mutual incoherence condition, we establish sufficient conditions on the triple (n, p, d) for which exact sparsity recovery of B* is possible with high probability, where d is the degree of the underlying graph. We also establish guarantees for the recovery of B* in the element-wise maximum, Frobenius, and operator norms. Finally, we complement our theoretical results with experimental validation of the performance of the proposed estimator on synthetic and real-world data.
UR - http://www.scopus.com/inward/record.url?scp=85163160070&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85163160070&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85163160070
T3 - Advances in Neural Information Processing Systems
BT - Advances in Neural Information Processing Systems 35 - 36th Conference on Neural Information Processing Systems, NeurIPS 2022
A2 - Koyejo, S.
A2 - Mohamed, S.
A2 - Agarwal, A.
A2 - Belgrave, D.
A2 - Cho, K.
A2 - Oh, A.
PB - Neural information processing systems foundation
T2 - 36th Conference on Neural Information Processing Systems, NeurIPS 2022
Y2 - 28 November 2022 through 9 December 2022
ER -