Chameleon: A Generalized Reconfigurable Open-Source Architecture for Deep Neural Network Training

Mihailo Isakov, Alan Ehret, Michel Kinsy

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

Abstract

We present Chameleon, an architecture for neural network training and inference design exploration on FPGAs. While many different network types and optimizations exist, it is not always clear how these differences impact the hardware implementation of neural networks. Chameleon is built with extensibility and experimentation in mind, supporting a range of activations, neuron types, and signal bitwidths. Furthermore, Chameleon is modular, allowing designers to easily swap existing components for improved ones and thereby speed up research iteration. While a large number of inference architectures already exist, we focus on accelerating training, since training time is the bottleneck for neural network architecture exploration. Chameleon therefore aims to help researchers better understand the bottlenecks in training deep neural networks and create models that circumvent these barriers. Finally, Chameleon is designed to be simple, requiring neither a compiler nor reconfiguration to function. This allows quick, localized changes to the architecture and facilitates design exploration. We present synthesis results on an Altera Cyclone V SoC and report the design's resource usage. We conclude with an evaluation by training a network on the Wisconsin Breast Cancer dataset. The RTL and synthesis files for the architecture will be open-sourced upon publication at http://ascslab.org/research/abc/chameleon/index.html.
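For context on the evaluation task mentioned above, the sketch below is a minimal software reference for training a small fully connected network on the Wisconsin Breast Cancer dataset. It is not the Chameleon RTL or the authors' experimental setup; the layer sizes, activation choice, and hyperparameters are illustrative assumptions only.

```python
# Illustrative software baseline (NOT the Chameleon FPGA architecture):
# train a small MLP on the Wisconsin Breast Cancer dataset, the benchmark
# used in the paper's evaluation. All hyperparameters here are assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

# Load the 30-feature binary classification dataset.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Standardize features; fixed-point hardware designs typically also rely on
# bounded, normalized inputs when choosing signal bitwidths.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# A small fully connected network; on a configurable architecture like
# Chameleon, equivalent layers would map to parameterized neuron and
# activation blocks with chosen bitwidths.
clf = MLPClassifier(hidden_layer_sizes=(16, 8), activation="relu",
                    max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))
```

Such a floating-point baseline is useful only as a correctness and accuracy reference point when experimenting with reduced-precision hardware training.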

Original language: English (US)
Title of host publication: 2018 IEEE High Performance Extreme Computing Conference, HPEC 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781538659892
DOIs
State: Published - Nov 26 2018
Externally published: Yes
Event: 2018 IEEE High Performance Extreme Computing Conference, HPEC 2018 - Waltham, United States
Duration: Sep 25 2018 – Sep 27 2018

Publication series

Name: 2018 IEEE High Performance Extreme Computing Conference, HPEC 2018

Conference

Conference: 2018 IEEE High Performance Extreme Computing Conference, HPEC 2018
Country/Territory: United States
City: Waltham
Period: 9/25/18 – 9/27/18

Keywords

  • Architecture
  • Design exploration
  • FPGA
  • Hardware
  • Neural networks
  • Open-source
  • RTL
  • Training

ASJC Scopus subject areas

  • Computer Science (miscellaneous)
  • Hardware and Architecture
