TY - JOUR
T1 - Analog architectures for neural network acceleration based on non-volatile memory
AU - Xiao, T. Patrick
AU - Bennett, Christopher H.
AU - Feinberg, Ben
AU - Agarwal, Sapan
AU - Marinella, Matthew J.
N1 - Publisher Copyright:
© 2020 Author(s).
PY - 2020/9/1
Y1 - 2020/9/1
AB - Analog hardware accelerators, which perform computation within a dense memory array, have the potential to overcome the major bottlenecks faced by digital hardware for data-heavy workloads such as deep learning. Exploiting the intrinsic computational advantages of memory arrays, however, has proven to be challenging principally due to the overhead imposed by the peripheral circuitry and due to the non-ideal properties of memory devices that play the role of the synapse. We review the existing implementations of these accelerators for deep supervised learning, organizing our discussion around the different levels of the accelerator design hierarchy, with an emphasis on circuits and architecture. We explore and consolidate the various approaches that have been proposed to address the critical challenges faced by analog accelerators, for both neural network inference and training, and highlight the key design trade-offs underlying these techniques.
UR - http://www.scopus.com/inward/record.url?scp=85093521043&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85093521043&partnerID=8YFLogxK
DO - 10.1063/1.5143815
M3 - Review article
AN - SCOPUS:85093521043
SN - 1931-9401
VL - 7
JO - Applied Physics Reviews
JF - Applied Physics Reviews
IS - 3
M1 - 031301
ER -