Quality Robust Mixtures of Deep Neural Networks

Samuel F. Dodge, Lina Karam

Research output: Contribution to journal › Article › peer-review

27 Scopus citations


We study deep neural networks for classification of images with quality distortions. Deep network performance on poor quality images can be greatly improved if the network is fine-tuned with distorted data. However, it is difficult for a single fine-tuned network to perform well across multiple distortion types. We propose a mixture-of-experts-based ensemble method, MixQualNet, that is robust to multiple distortion types. Each 'expert' in our model is trained on a particular type of distortion. The output of the model is a weighted sum of the expert models, where the weights are determined by a separate gating network. The gating network is trained to predict weights for a particular distortion type and level. During testing, the network is blind to the distortion level and type, yet can still assign appropriate weights to the expert models. In order to reduce the computational complexity, we introduce weight sharing into the MixQualNet. We utilize the TreeNet weight sharing architecture as well as introduce the Inverted TreeNet architecture. While both weight sharing architectures reduce memory requirements, our proposed Inverted TreeNet also achieves improved accuracy.
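The weighted-sum combination described in the abstract can be sketched as follows. This is a minimal illustration only, assuming softmax gating over per-expert class posteriors; the paper's actual experts and gating network are convolutional networks, and the function and variable names here are hypothetical.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mixture_output(expert_logits, gate_logits):
    """Combine distortion-specific experts via gating weights (sketch).

    expert_logits: (num_experts, num_classes) class scores, one row per
        expert, where each expert was trained on one distortion type.
    gate_logits: (num_experts,) scores from the gating network, which at
        test time must infer expert relevance without knowing the
        distortion type or level.
    """
    weights = softmax(gate_logits)          # expert weights sum to 1
    expert_probs = softmax(expert_logits)   # each expert's class posterior
    return weights @ expert_probs           # (num_classes,) mixture prediction
```

Because the output is a convex combination of probability distributions, it is itself a valid distribution; when the gating network is confident in one expert, the mixture follows that expert's prediction.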

Original language: English (US)
Article number: 8410945
Pages (from-to): 5553-5562
Number of pages: 10
Journal: IEEE Transactions on Image Processing
Issue number: 11
State: Published - Nov 2018

Keywords

  • Saliency
  • convolutional neural networks
  • deep learning
  • human eye fixations

ASJC Scopus subject areas

  • Software
  • Computer Graphics and Computer-Aided Design


