Partition SRAM and RRAM based synaptic arrays for neuro-inspired computing

Pai Yu Chen, Shimeng Yu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

19 Scopus citations


Memory array architectures have been proposed for on-chip acceleration of the weighted sum and weight update in neuro-inspired machine learning algorithms. As the learning algorithms usually operate on a large weight matrix, efficiently mapping a large weight matrix onto the hardware accelerator may require partitioning the matrix into multiple sub-arrays. In this work, we built a circuit-level macro simulator to evaluate the performance of partitioning a 512×512 weight matrix onto SRAM- and RRAM-based accelerators. Generally, with more partitioning and finer granularity of the array architecture, the read/write latency and the dynamic read/write energy decrease due to increased computation parallelism, at the expense of larger area and leakage power, as shown in the case of the SRAM accelerator. However, the RRAM accelerator does not improve its read latency and read energy beyond a certain partition point, because the overhead of multiple intermediate stages of adders and registers comes to dominate.
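The partitioning scheme the abstract describes can be sketched in a few lines. This is an illustrative model, not the authors' macro simulator: `partitioned_weighted_sum`, the 64×64 sub-array size, and the pairwise adder-tree reduction are all assumptions chosen to show why finer granularity adds intermediate adder/register stages.

```python
import numpy as np

def partitioned_weighted_sum(W, x, sub_size):
    """Compute y = W @ x by splitting W into sub_size-wide column blocks,
    mimicking how a large synaptic weight matrix is mapped onto multiple
    smaller SRAM/RRAM sub-arrays.

    Each block of sub-arrays produces a partial weighted sum; the partial
    sums are then merged by successive stages of adders (modeled here as a
    pairwise reduction) -- the overhead the paper identifies as dominant
    for the RRAM accelerator under fine-grained partitioning.
    """
    n = W.shape[0]
    assert W.shape == (n, n) and n % sub_size == 0
    k = n // sub_size  # number of sub-array columns

    partials = []
    for j in range(k):
        cols = slice(j * sub_size, (j + 1) * sub_size)
        # each sub-array column sees only its slice of the input vector
        partials.append(W[:, cols] @ x[cols])

    # adder-tree merge: ceil(log2(k)) intermediate adder/register stages
    stages = 0
    while len(partials) > 1:
        partials = [partials[i] + partials[i + 1] if i + 1 < len(partials)
                    else partials[i] for i in range(0, len(partials), 2)]
        stages += 1
    return partials[0], stages

# 512x512 weight matrix partitioned into 64x64 sub-arrays (8 per row):
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))
x = rng.standard_normal(512)
y, stages = partitioned_weighted_sum(W, x, 64)
assert np.allclose(y, W @ x)  # matches the unpartitioned weighted sum
print(stages)                 # 3 adder stages for 8 partial sums
```

Note how halving `sub_size` doubles the number of partial sums and adds another merge stage: the extra parallelism per sub-array comes at the cost of a deeper adder tree, which is the trade-off the abstract reports.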

Original language: English (US)
Title of host publication: ISCAS 2016 - IEEE International Symposium on Circuits and Systems
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 4
ISBN (Electronic): 9781479953400
State: Published - Jul 29 2016
Event: 2016 IEEE International Symposium on Circuits and Systems, ISCAS 2016 - Montreal, Canada
Duration: May 22 2016 - May 25 2016




Keywords

  • granularity
  • hardware acceleration
  • neuromorphic computing
  • partition
  • RRAM
  • SRAM

ASJC Scopus subject areas

  • Electrical and Electronic Engineering

