Comprehensive evaluation of OpenCL-based CNN implementations for FPGAs

Ricardo Tapiador-Morales, Antonio Rios-Navarro, Alejandro Linares-Barranco, Minkyu Kim, Deepak Kadetotad, Jae-sun Seo

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Scopus citation


Deep learning has significantly advanced the state of the art in artificial intelligence, gaining wide popularity in both industry and academia. Special interest surrounds Convolutional Neural Networks (CNNs), which take inspiration from the hierarchical structure of the visual cortex to form deep stacks of convolutional layers followed by fully connected classifiers. Hardware implementations of these deep CNN architectures face memory bottlenecks: the many convolutional and fully connected layers demand a large amount of communication for parallel computation. Multi-core CPU-based solutions have proven inadequate for this problem due to the memory wall and low parallelism. Many-core GPU architectures show superior performance, but they consume high power and also face memory constraints due to inconsistencies between cache and main memory. OpenCL is commonly used to describe these architectures for execution on GPGPUs or FPGAs. FPGA design solutions are also actively being explored, since they allow the memory hierarchy to be implemented using embedded parallel BlockRAMs. This boosts the parallel use of shared memory elements between multiple processing units, avoiding data replication and inconsistencies, and makes FPGAs potentially powerful platforms for real-time CNN classification. In this paper, the OpenCL co-design frameworks adopted by both Altera and Xilinx for pseudo-automatic development are evaluated. A comprehensive evaluation and comparison for a 5-layer deep CNN is presented, discussing hardware resources, temporal performance, and the OpenCL architecture for CNNs. Xilinx demonstrates faster synthesis, better FPGA resource utilization, and more compact boards; Altera provides multi-platform tools, a mature design community, and better execution times.
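The convolution at the heart of each CNN layer — the operation an OpenCL kernel would parallelize across work-items on the FPGA — can be sketched as a plain C reference. This is an illustrative sketch, not code from the paper; the function name, loop structure, and "valid" (no-padding) convention are assumptions for clarity.

```c
#include <stddef.h>

/* Reference 2D "valid" convolution for one output feature map.
 * In an OpenCL port, the two outer loops over (y, x) would map to
 * the work-item grid, with the kernel weights held in local memory
 * (BlockRAM on an FPGA). Sizes/names here are illustrative only. */
void conv2d_valid(const float *in, int ih, int iw,
                  const float *k, int kh, int kw,
                  float *out /* size (ih-kh+1) * (iw-kw+1) */)
{
    int oh = ih - kh + 1;  /* output height */
    int ow = iw - kw + 1;  /* output width */
    for (int y = 0; y < oh; y++) {
        for (int x = 0; x < ow; x++) {
            float acc = 0.0f;
            /* multiply-accumulate over the kernel window */
            for (int ky = 0; ky < kh; ky++)
                for (int kx = 0; kx < kw; kx++)
                    acc += in[(y + ky) * iw + (x + kx)] * k[ky * kw + kx];
            out[y * ow + x] = acc;
        }
    }
}
```

Both the Altera and Xilinx OpenCL flows start from a kernel of essentially this shape and then apply vendor-specific pragmas (loop unrolling, memory partitioning) during synthesis.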

Original language: English (US)
Title of host publication: Advances in Computational Intelligence - 14th International Work-Conference on Artificial Neural Networks, IWANN 2017, Proceedings
Editors: Ignacio Rojas, Andreu Catala, Gonzalo Joya
Publisher: Springer Verlag
Number of pages: 12
ISBN (Print): 9783319591469
State: Published - 2017
Event: 16th International Conference on Artificial Intelligence and Soft Computing, ICAISC 2017 - Zakopane, Poland
Duration: Jun 11 2017 – Jun 15 2017

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 10306 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Other: 16th International Conference on Artificial Intelligence and Soft Computing, ICAISC 2017


Keywords

  • Altera
  • Caffe
  • Convolutional Neural Network
  • Deep learning
  • FPGA
  • Hardware acceleration
  • OpenCL
  • Xilinx

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science


