Unconstrained ear recognition using deep neural networks

Samuel Dodge, Jinane Mounsef, Lina Karam

Research output: Contribution to journal › Article › peer-review

65 Scopus citations


The authors perform unconstrained ear recognition using transfer learning with deep neural networks (DNNs). First, they show how existing DNNs can be used as feature extractors. The extracted features are fed to a shallow classifier to perform ear recognition. Performance can be improved by augmenting the training dataset with small image transformations. Next, they compare the performance of the feature-extraction models with fine-tuned networks. However, because the datasets are limited in size, a fine-tuned network tends to over-fit. They propose a deep learning-based averaging ensemble to reduce the effect of over-fitting. Performance results are provided on unconstrained ear recognition datasets: the AWE and CVLE datasets, as well as a combined AWE + CVLE dataset. They show that their ensemble achieves the best recognition performance on these datasets compared to DNN feature-extraction-based models and single fine-tuned models.
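The abstract does not spell out the exact form of the averaging ensemble, but the core idea of averaging the class-probability outputs of several fine-tuned networks can be sketched as follows. This is a minimal illustration assuming each model produces a softmax probability matrix of shape (n_samples, n_classes); the function name `averaging_ensemble` is introduced here for illustration only.

```python
import numpy as np

def averaging_ensemble(prob_list):
    """Combine per-model class probabilities by averaging, then
    predict the class with the highest mean probability.

    prob_list: list of (n_samples, n_classes) probability arrays,
               one array per fine-tuned model (illustrative assumption).
    """
    stacked = np.stack(prob_list)      # (n_models, n_samples, n_classes)
    mean_probs = stacked.mean(axis=0)  # average over the model axis
    return mean_probs.argmax(axis=1)   # predicted class index per sample

# Toy example with two "models" and two classes: the models disagree in
# confidence on sample 0, and averaging smooths their outputs.
m1 = np.array([[0.6, 0.4], [0.2, 0.8]])
m2 = np.array([[0.3, 0.7], [0.1, 0.9]])
preds = averaging_ensemble([m1, m2])   # mean probs: [[0.45, 0.55], [0.15, 0.85]]
print(preds)                           # [1 1]
```

Averaging the probabilities of several independently fine-tuned networks tends to reduce the variance of any single over-fitted model, which is consistent with the over-fitting mitigation the abstract describes.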

Original language: English (US)
Pages (from-to): 207-214
Number of pages: 8
Journal: IET Biometrics
Issue number: 3
State: Published - May 1 2018

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition

