NeuroFabric: Hardware and ML Model Co-Design for A Priori Sparse Neural Network Training

Mihailo Isakov, Michel A. Kinsy

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Sparse Deep Neural Networks (DNN) offer a large improvement in model storage requirements, execution latency, and execution throughput. DNN pruning is contingent on knowing model weights, so networks can be pruned only after training. A priori sparse neural networks have been proposed as a way to extend sparsity benefits to the training process as well. Selecting a topology a priori is also beneficial for hardware accelerator specialization, lowering power, chip area, and latency. We present NeuroFabric, a hardware-ML model co-design approach that jointly optimizes a sparse neural network topology and a hardware accelerator configuration. NeuroFabric replaces dense DNN layers with cascades of sparse layers with a specific topology. We present an efficient and data-agnostic method for sparse network topology optimization, and show that parallel butterfly networks with skip connections achieve the best accuracy independent of sparsity or depth. We also present a multi-objective optimization framework that finds a Pareto frontier of hardware-ML model configurations over six objectives: accuracy, parameter count, throughput, latency, power, and hardware area.
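To make the idea concrete, here is a minimal NumPy sketch of replacing one dense layer with a cascade of butterfly-sparse layers with skip connections, in the spirit of the abstract. This is an illustrative reading, not the authors' implementation: the mask construction, the ReLU nonlinearity, and the additive skip connection are assumptions chosen for clarity.

```python
import numpy as np

def butterfly_mask(n, stage):
    """Fixed a priori sparsity mask for one butterfly stage.

    Assumes n is a power of two. Each unit keeps a connection to
    itself and to its butterfly partner at distance 2**stage, so
    every row has exactly 2 nonzeros (2n total vs. n**2 dense).
    """
    m = np.zeros((n, n), dtype=np.float32)
    for i in range(n):
        m[i, i] = 1.0
        m[i, i ^ (1 << stage)] = 1.0
    return m

def sparse_cascade(x, weights, masks):
    """Cascade of masked (sparse) layers standing in for one dense layer.

    Each stage applies a masked weight matrix, adds the stage input
    back (skip connection), and applies a ReLU. Hypothetical helper,
    for illustration only.
    """
    h = x
    for W, M in zip(weights, masks):
        h = np.maximum(h + (W * M) @ h, 0.0)
    return h

n = 8                                   # toy width; log2(8) = 3 stages
rng = np.random.default_rng(0)
masks = [butterfly_mask(n, s) for s in range(3)]
weights = [rng.standard_normal((n, n)).astype(np.float32) for _ in masks]
x = rng.standard_normal(n).astype(np.float32)
y = sparse_cascade(x, weights, masks)   # shape (8,), nonnegative after ReLU
```

Because the masks are fixed before training, the sparsity pattern is known a priori, which is what lets a hardware accelerator be specialized to the topology rather than to learned, data-dependent pruning.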

Original language: English (US)
Title of host publication: Proceedings - 2022 IEEE 40th International Conference on Computer Design, ICCD 2022
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 4
ISBN (Electronic): 9781665461863
State: Published - 2022
Event: 40th IEEE International Conference on Computer Design, ICCD 2022 - Olympic Valley, United States
Duration: Oct 23 2022 - Oct 26 2022

Publication series

Name: Proceedings - IEEE International Conference on Computer Design: VLSI in Computers and Processors
ISSN (Print): 1063-6404


Conference: 40th IEEE International Conference on Computer Design, ICCD 2022
Country/Territory: United States
City: Olympic Valley


Keywords

  • acceleration
  • neural network
  • sparsity
  • topology

ASJC Scopus subject areas

  • Hardware and Architecture
  • Electrical and Electronic Engineering


