Neuro-inspired computing has made significant progress in recent years, yet its computational efficiency and hardware cost still lag behind those of the biological nervous system, especially during the training stage. This work aims to understand this gap from a neural-motif perspective, focusing on the feedforward inhibitory motif, which has been found in many cortical systems and plays a vital role in sparse learning. This work first establishes a neural network model that emulates the insect olfactory system, and then systematically studies the effects of the feedforward inhibitory motif. The performance and efficiency of the network models, with and without the feedforward inhibitory motif, are evaluated on a handwritten digit recognition task. As the results demonstrate, the feedforward inhibitory motif reduces the network size by more than 3X at the same recognition accuracy of 95%. Further simulation experiments reveal that feedforward inhibition not only dynamically regulates the firing rate of excitatory neurons, promoting and stabilizing sparsity, but also provides a coarse categorization of the inputs, which improves the final accuracy with a smaller, cascaded structure. These results distinguish the feedforward inhibition pathway from the previously studied feedback inhibition, illustrating its functional importance for computational and structural efficiency.
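The regulation described above can be illustrated with a minimal toy sketch (not the paper's actual model): a population of excitatory neurons driven by a shared input, plus one feedforward inhibitory unit driven by that same input, which suppresses excitatory activity in proportion to the total input and thereby sparsifies the population response. All names, sizes, and parameters below are illustrative assumptions.

```python
import numpy as np

# Toy sketch, assuming a rate/threshold abstraction of the motif;
# population sizes and gains are arbitrary, not from the paper.
rng = np.random.default_rng(0)

n_exc, n_in = 100, 50
x = rng.random(n_in)              # input pattern (e.g. pixel intensities)
W = rng.random((n_exc, n_in))     # excitatory feedforward weights

drive = W @ x                     # feedforward drive to each excitatory neuron
theta = np.median(drive)          # threshold: ~half the population would fire

# Feedforward inhibition scales with the total input, so it regulates
# excitatory activity before any excitatory spikes occur.
g_inh = 0.05                      # inhibitory gain (assumed)
inhibition = g_inh * x.sum()

fires_without = drive > theta
fires_with = drive - inhibition > theta

frac_without = fires_without.mean()
frac_with = fires_with.mean()
print(f"active fraction without inhibition: {frac_without:.2f}")
print(f"active fraction with inhibition:    {frac_with:.2f}")
```

Because the inhibition is input-driven rather than spike-driven (as feedback inhibition would be), the active fraction drops for strong inputs and rises for weak ones, keeping the excitatory code sparse across input intensities.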

Original language: English (US)
Pages (from-to): 141-151
Number of pages: 11
State: Published - Dec 6 2017


Keywords

  • Feedforward inhibition
  • Handwritten recognition
  • Hebbian learning
  • Neural motif
  • Sparse learning
  • Spiking neural network

ASJC Scopus subject areas

  • Computer Science Applications
  • Cognitive Neuroscience
  • Artificial Intelligence


Title: Improving efficiency in sparse learning with the feedforward inhibitory motif
