TY - GEN
T1 - Survey of attacks and defenses on edge-deployed neural networks
AU - Isakov, Mihailo
AU - Gadepally, Vijay
AU - Gettings, Karen M.
AU - Kinsy, Michel A.
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/9
Y1 - 2019/9
N2 - Deep Neural Network (DNN) workloads are quickly moving from datacenters onto edge devices, for latency, privacy, or energy reasons. While datacenter networks can be protected using conventional cybersecurity measures, edge neural networks bring a host of new security challenges. Unlike classic IoT applications, edge neural networks are typically very compute- and memory-intensive, their execution is data-independent, and they are robust to noise and faults. Neural network models may be very expensive to develop and can potentially reveal information about the private data they were trained on, requiring special care in distribution. The hidden states and outputs of the network can also be used to reconstruct user inputs, potentially violating users' privacy. Furthermore, neural networks are vulnerable to adversarial attacks, which may cause misclassifications and violate the integrity of the output. These properties add challenges when securing edge-deployed DNNs, requiring new considerations, threat models, priorities, and approaches for securely and privately deploying DNNs to the edge. In this work, we cover the landscape of attacks on, and defenses of, neural networks deployed in edge devices and provide a taxonomy of attacks and defenses targeting edge DNNs.
AB - Deep Neural Network (DNN) workloads are quickly moving from datacenters onto edge devices, for latency, privacy, or energy reasons. While datacenter networks can be protected using conventional cybersecurity measures, edge neural networks bring a host of new security challenges. Unlike classic IoT applications, edge neural networks are typically very compute- and memory-intensive, their execution is data-independent, and they are robust to noise and faults. Neural network models may be very expensive to develop and can potentially reveal information about the private data they were trained on, requiring special care in distribution. The hidden states and outputs of the network can also be used to reconstruct user inputs, potentially violating users' privacy. Furthermore, neural networks are vulnerable to adversarial attacks, which may cause misclassifications and violate the integrity of the output. These properties add challenges when securing edge-deployed DNNs, requiring new considerations, threat models, priorities, and approaches for securely and privately deploying DNNs to the edge. In this work, we cover the landscape of attacks on, and defenses of, neural networks deployed in edge devices and provide a taxonomy of attacks and defenses targeting edge DNNs.
KW - Internet of Things
KW - Neural networks
KW - Security
UR - http://www.scopus.com/inward/record.url?scp=85076765257&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85076765257&partnerID=8YFLogxK
U2 - 10.1109/HPEC.2019.8916519
DO - 10.1109/HPEC.2019.8916519
M3 - Conference contribution
AN - SCOPUS:85076765257
T3 - 2019 IEEE High Performance Extreme Computing Conference, HPEC 2019
BT - 2019 IEEE High Performance Extreme Computing Conference, HPEC 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 IEEE High Performance Extreme Computing Conference, HPEC 2019
Y2 - 24 September 2019 through 26 September 2019
ER -