TY - JOUR
T1 - SIAM: Chiplet-based Scalable In-Memory Acceleration with Mesh for Deep Neural Networks
T2 - ACM Transactions on Embedded Computing Systems
AU - Krishnan, Gokul
AU - Mandal, Sumit K.
AU - Pannala, Manvitha
AU - Chakrabarti, Chaitali
AU - Seo, Jae-Sun
AU - Ogras, Umit Y.
AU - Cao, Yu
N1 - Funding Information:
This article appears as part of the ESWEEK-TECS special issue and was presented in the International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS), 2021. This work was supported by C-BRIC, one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program sponsored by DARPA, and SRC GRC Task 3012.001. Authors' addresses: G. Krishnan, M. Pannala, C. Chakrabarti, J.-S. Seo, and Y. Cao, Arizona State University, School of Electrical, Computer, and Energy Engineering, Tempe, AZ 85287, USA; emails: {gkrish19, mpannal1, chaitali, jseo28, Yu.Cao}@asu.edu; S. K. Mandal and U. Y. Ogras, University of Wisconsin-Madison, Department of Electrical and Computer Engineering, Madison, WI 53706, USA; emails: {skmandal, uogras}@wisc.edu.
Publisher Copyright:
© 2021 Association for Computing Machinery.
PY - 2021/10
Y1 - 2021/10
N2 - In-memory computing (IMC) on a monolithic chip for deep learning faces dramatic challenges in area, yield, and on-chip interconnection cost due to ever-increasing model sizes. 2.5D integration, or chiplet-based architecture, interconnects multiple small chips (i.e., chiplets) to form a large computing system, presenting a feasible solution beyond a monolithic IMC architecture for accelerating large deep learning models. This paper presents a new benchmarking simulator, SIAM, to evaluate the performance of chiplet-based IMC architectures and explore the potential of such a paradigm shift in IMC architecture design. SIAM integrates device, circuit, architecture, network-on-chip (NoC), network-on-package (NoP), and DRAM access models to realize an end-to-end system. SIAM is scalable in its support of a wide range of deep neural networks (DNNs), customizable to various network structures and configurations, and capable of efficient design space exploration. We demonstrate the flexibility, scalability, and simulation speed of SIAM by benchmarking different state-of-the-art DNNs on the CIFAR-10, CIFAR-100, and ImageNet datasets. We further calibrate the simulation results against a published silicon result, SIMBA. The chiplet-based IMC architecture obtained through SIAM achieves 130× and 72× improvements in energy efficiency for ResNet-50 on the ImageNet dataset compared to Nvidia V100 and T4 GPUs, respectively.
KW - Chiplet architecture
KW - DNN acceleration
KW - IMC benchmarking
KW - in-memory compute
KW - network-on-chip
KW - network-on-package
UR - http://www.scopus.com/inward/record.url?scp=85115832331&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85115832331&partnerID=8YFLogxK
U2 - 10.1145/3476999
DO - 10.1145/3476999
M3 - Article
AN - SCOPUS:85115832331
SN - 1539-9087
VL - 20
JO - ACM Transactions on Embedded Computing Systems
JF - ACM Transactions on Embedded Computing Systems
IS - 5s
M1 - 68
ER -