TY - JOUR
T1 - GrAMME: Semisupervised Learning Using Multilayered Graph Attention Models
T2 - IEEE Transactions on Neural Networks and Learning Systems
AU - Shanthamallu, Uday Shankar
AU - Thiagarajan, Jayaraman J.
AU - Song, Huan
AU - Spanias, Andreas
N1 - Funding Information:
This work was performed under the auspices of the U.S. Department of Energy by the Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. The authors would like to thank P. Sattigeri for useful discussions and sharing data.
Funding Information:
Manuscript received October 2, 2018; revised March 26, 2019 and August 21, 2019; accepted October 9, 2019. Date of publication November 14, 2019; date of current version October 6, 2020. This work was supported in part by the Sensor Signal and Information Processing (SenSIP) Center, Arizona State University. (Corresponding author: Uday Shankar Shanthamallu.) U. S. Shanthamallu and A. Spanias are with the Sensor Signal and Information Processing (SenSIP) Center, School of Electrical, Computer and Energy Engineering (ECEE), Arizona State University, Tempe, AZ 85287 USA (e-mail: ushantha@asu.edu).
Publisher Copyright:
© 2012 IEEE.
PY - 2020/10
Y1 - 2020/10
N2 - Modern data analysis pipelines are becoming increasingly complex due to the presence of multiview information sources. While graphs are effective in modeling complex relationships, in many scenarios, a single graph is rarely sufficient to succinctly represent all interactions, and hence, multilayered graphs have become popular. Though this leads to richer representations, extending solutions from the single-graph case is not straightforward. Consequently, there is a strong need for novel solutions to solve classical problems, such as node classification, in the multilayered case. In this article, we consider the problem of semisupervised learning with multilayered graphs. Though deep network embeddings, e.g., DeepWalk, are widely adopted for community discovery, we argue that feature learning with random node attributes, using graph neural networks, can be more effective. To this end, we propose to use attention models for effective feature learning and develop two novel architectures, GrAMME-SG and GrAMME-Fusion, that exploit the interlayer dependences for building multilayered graph embeddings. Using empirical studies on several benchmark data sets, we evaluate the proposed approaches and demonstrate significant performance improvements in comparison with the state-of-the-art network embedding strategies. The results also show that using simple random features is an effective choice, even in cases where explicit node attributes are not available.
AB - Modern data analysis pipelines are becoming increasingly complex due to the presence of multiview information sources. While graphs are effective in modeling complex relationships, in many scenarios, a single graph is rarely sufficient to succinctly represent all interactions, and hence, multilayered graphs have become popular. Though this leads to richer representations, extending solutions from the single-graph case is not straightforward. Consequently, there is a strong need for novel solutions to solve classical problems, such as node classification, in the multilayered case. In this article, we consider the problem of semisupervised learning with multilayered graphs. Though deep network embeddings, e.g., DeepWalk, are widely adopted for community discovery, we argue that feature learning with random node attributes, using graph neural networks, can be more effective. To this end, we propose to use attention models for effective feature learning and develop two novel architectures, GrAMME-SG and GrAMME-Fusion, that exploit the interlayer dependences for building multilayered graph embeddings. Using empirical studies on several benchmark data sets, we evaluate the proposed approaches and demonstrate significant performance improvements in comparison with the state-of-the-art network embedding strategies. The results also show that using simple random features is an effective choice, even in cases where explicit node attributes are not available.
KW - Attention
KW - deep learning
KW - multilayered graphs
KW - network embeddings
KW - semisupervised learning
UR - http://www.scopus.com/inward/record.url?scp=85092680171&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85092680171&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2019.2948797
DO - 10.1109/TNNLS.2019.2948797
M3 - Article
C2 - 31725400
AN - SCOPUS:85092680171
SN - 2162-237X
VL - 31
SP - 3977
EP - 3988
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 10
M1 - 8901181
ER -