TY - GEN
T1 - Towards efficient neural networks on-a-chip
T2 - 2019 China Semiconductor Technology International Conference, CSTIC 2019
AU - Du, Xiaocong
AU - Krishnan, Gokul
AU - Mohanty, Abinash
AU - Li, Zheng
AU - Charan, Gouranga
AU - Cao, Yu
N1 - Funding Information:
This work was supported in part by the Semiconductor Research Corporation (SRC) and DARPA.
Publisher Copyright:
© 2019 IEEE.
PY - 2019/3
Y1 - 2019/3
N2 - Machine learning algorithms have made significant advances in many applications. However, their hardware implementation on state-of-the-art platforms still faces several challenges and is limited by various factors, such as memory volume, memory bandwidth, and interconnection overhead. The adoption of the crossbar architecture with emerging memory technology partially solves the problem but introduces process variation and other concerns. In this paper, we present novel solutions to two fundamental issues in crossbar implementation of Artificial Intelligence (AI) algorithms: device variation and insufficient interconnections. These solutions are inspired by the statistical properties of the algorithms themselves, especially the redundancy in neural network nodes and connections. Through Random Sparse Adaptation and by pruning connections following the Small-World model, we demonstrate robust and efficient performance on representative datasets such as MNIST and CIFAR-10. Moreover, we present a Continuous Growth and Pruning algorithm for future learning and adaptation on hardware.
AB - Machine learning algorithms have made significant advances in many applications. However, their hardware implementation on state-of-the-art platforms still faces several challenges and is limited by various factors, such as memory volume, memory bandwidth, and interconnection overhead. The adoption of the crossbar architecture with emerging memory technology partially solves the problem but introduces process variation and other concerns. In this paper, we present novel solutions to two fundamental issues in crossbar implementation of Artificial Intelligence (AI) algorithms: device variation and insufficient interconnections. These solutions are inspired by the statistical properties of the algorithms themselves, especially the redundancy in neural network nodes and connections. Through Random Sparse Adaptation and by pruning connections following the Small-World model, we demonstrate robust and efficient performance on representative datasets such as MNIST and CIFAR-10. Moreover, we present a Continuous Growth and Pruning algorithm for future learning and adaptation on hardware.
UR - http://www.scopus.com/inward/record.url?scp=85069499030&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85069499030&partnerID=8YFLogxK
U2 - 10.1109/CSTIC.2019.8755608
DO - 10.1109/CSTIC.2019.8755608
M3 - Conference contribution
AN - SCOPUS:85069499030
T3 - China Semiconductor Technology International Conference 2019, CSTIC 2019
BT - China Semiconductor Technology International Conference 2019, CSTIC 2019
A2 - Claeys, Cor
A2 - Huang, Ru
A2 - Wu, Hanming
A2 - Lin, Qinghuang
A2 - Liang, Steve
A2 - Song, Peilin
A2 - Guo, Zhen
A2 - Lai, Kafai
A2 - Zhang, Ying
A2 - Qu, Xinping
A2 - Lung, Hsiang-Lan
A2 - Yu, Wenjian
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 18 March 2019 through 19 March 2019
ER -