In-Memory Computing for AI Accelerators: Challenges and Solutions

Gokul Krishnan, Sumit K. Mandal, Chaitali Chakrabarti, Jae Sun Seo, Umit Y. Ogras, Yu Cao

Research output: Chapter in Book/Report/Conference proceeding › Chapter

1 Scopus citation

Abstract

In-memory computing (IMC)-based hardware reduces both latency and energy consumption for compute-intensive machine learning (ML) applications. To date, several SRAM- and ReRAM-based IMC hardware architectures for accelerating ML applications have been proposed in the literature. However, crossbar-based IMC hardware poses several design challenges. In this chapter, we first describe the ML algorithms recently adopted in the literature. Then, we elucidate the need for IMC-based hardware accelerators and survey IMC techniques for compute-intensive ML applications. Next, we discuss the challenges associated with IMC architectures. We identify that designing an energy-efficient interconnect is particularly challenging for IMC hardware, and we therefore review the interconnect techniques for IMC architectures proposed in the literature. Finally, we describe performance evaluation techniques for IMC architectures. We conclude the chapter with a summary and future avenues for IMC architectures for ML acceleration.
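The crossbar IMC primitive the abstract refers to can be sketched as an idealized analog matrix-vector multiply: weights are stored as cell conductances, inputs are applied as row voltages, and each column current accumulates the dot product via Ohm's and Kirchhoff's laws. The sketch below is illustrative only (the function name and numbers are assumptions, not taken from the chapter), and it ignores non-idealities such as wire resistance and device variation that the chapter's challenges discussion concerns.

```python
import numpy as np

def crossbar_mvm(voltages, conductances):
    """Idealized analog MVM on a crossbar (illustrative, not the chapter's model).

    voltages:     input activations applied to the rows, shape (rows,)
    conductances: weights programmed as cell conductances G, shape (rows, cols)
    Returns the column currents I_j = sum_i V_i * G_ij
    (Ohm's law per cell, Kirchhoff's current law per column).
    """
    return voltages @ conductances

# Example: a 3x2 crossbar computes a 3-input, 2-output layer in one step.
V = np.array([0.2, 0.5, 0.1])        # input voltages (hypothetical values)
G = np.array([[1.0, 0.0],
              [0.5, 2.0],
              [0.0, 1.0]])           # weight matrix as conductances
I = crossbar_mvm(V, G)               # column currents: [0.45, 1.1]
```

In a real accelerator these analog column currents are digitized by ADCs, and many such crossbar tiles are stitched together by the interconnect whose design the chapter identifies as a key challenge.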

Original language: English (US)
Title of host publication: Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing
Subtitle of host publication: Hardware Architectures
Publisher: Springer International Publishing
Pages: 199-224
Number of pages: 26
ISBN (Electronic): 9783031195686
ISBN (Print): 9783031195679
DOIs
State: Published - Jan 1 2023

ASJC Scopus subject areas

  • General Computer Science
  • General Engineering
  • General Social Sciences
