Abstract
In-memory computing with analog nonvolatile memories can accelerate the in situ training of deep neural networks. Recently, we proposed a synaptic cell comprising a ferroelectric field-effect transistor (FeFET) and two CMOS transistors (2T1F) that exploits hybrid precision for training and inference, overcoming the challenges of nonlinear and asymmetric weight update and achieving nearly software-comparable training accuracy at the algorithm level. In this paper, we further present circuit-level benchmark results for this hybrid precision synapse in terms of area, latency, and energy. The corresponding array architecture is presented and the array-level operations are illustrated. The benchmark is conducted with the multilayer perceptron (MLP) + NeuroSim framework, with comparison to other capacitor-assisted (e.g., 3T1C + 2PCM) hybrid precision cells. Design tradeoffs and scalability across the different implementations are discussed.
| Original language | English (US) |
| --- | --- |
| Article number | 8746639 |
| Pages (from-to) | 142-150 |
| Number of pages | 9 |
| Journal | IEEE Journal on Exploratory Solid-State Computational Devices and Circuits |
| Volume | 5 |
| Issue number | 2 |
| DOIs | |
| State | Published - Dec 2019 |
| Externally published | Yes |
Keywords
- Benchmark
- ferroelectric transistor (FeFET)
- in-memory computing
- neural network
- synaptic device
ASJC Scopus subject areas
- Electronic, Optical and Magnetic Materials
- Hardware and Architecture
- Electrical and Electronic Engineering