Discriminative, Restorative, and Adversarial Learning: Stepwise Incremental Pretraining

Zuwei Guo, Nahid Ui Islam, Michael B. Gotway, Jianming Liang

Research output: Chapter in Book/Report/Conference proceeding, Conference contribution


Abstract

Uniting three self-supervised learning (SSL) ingredients (discriminative, restorative, and adversarial learning) enables collaborative representation learning and yields three transferable components: a discriminative encoder, a restorative decoder, and an adversary encoder. To leverage this advantage, we have redesigned five prominent SSL methods, including Rotation, Jigsaw, Rubik's Cube, Deep Clustering, and TransVW, and formulated each in a United framework for 3D medical imaging. However, such a United framework increases model complexity and pretraining difficulty. To overcome this difficulty, we develop a stepwise incremental pretraining strategy: first, a discriminative encoder is trained via discriminative learning; the pretrained discriminative encoder is then attached to a restorative decoder, forming a skip-connected encoder-decoder, for joint discriminative and restorative learning; finally, the pretrained encoder-decoder is associated with an adversarial encoder for full discriminative, restorative, and adversarial learning. Our extensive experiments demonstrate that stepwise incremental pretraining stabilizes the training of United models, resulting in significant performance gains and annotation cost reduction via transfer learning on five target tasks, encompassing both classification and segmentation, across diseases, organs, datasets, and modalities. This performance is attributed to the synergy of the three SSL ingredients in our United framework, unleashed via stepwise incremental pretraining. All code and pretrained models are available at GitHub.com/JLiangLab/StepwisePretraining.
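The three-step schedule described in the abstract can be sketched as a simple training plan: each step activates one more component and one more objective than the last. This is a minimal, hypothetical illustration; all names and the loss-weighting scheme are assumptions, not the authors' released code (see GitHub.com/JLiangLab/StepwisePretraining for the actual implementation).

```python
def stepwise_schedule():
    """Components and objectives active at each incremental pretraining step.

    Illustrative sketch of the schedule from the abstract; names are
    assumptions, not the authors' code.
    """
    return [
        # Step 1: train the discriminative encoder alone.
        {"step": 1,
         "components": ["discriminative_encoder"],
         "objectives": ["discriminative"]},
        # Step 2: attach a restorative decoder to the pretrained encoder
        # (skip-connected encoder-decoder) and train both objectives jointly.
        {"step": 2,
         "components": ["discriminative_encoder", "restorative_decoder"],
         "objectives": ["discriminative", "restorative"]},
        # Step 3: add an adversary encoder for full three-way learning.
        {"step": 3,
         "components": ["discriminative_encoder", "restorative_decoder",
                        "adversary_encoder"],
         "objectives": ["discriminative", "restorative", "adversarial"]},
    ]


def combined_loss(objectives, losses, weights=None):
    """Weighted sum of the objectives active at the current step.

    `losses` maps objective name -> scalar loss value; `weights` gives
    optional per-objective coefficients (default 1.0). The weighting is
    an assumption for illustration only.
    """
    weights = weights or {}
    return sum(weights.get(name, 1.0) * losses[name] for name in objectives)
```

At step 2, for example, `combined_loss(["discriminative", "restorative"], ...)` would sum the two active losses, leaving the adversarial term out until step 3.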

Original language: English (US)
Title of host publication: Domain Adaptation and Representation Transfer - 4th MICCAI Workshop, DART 2022, Held in Conjunction with MICCAI 2022, Proceedings
Editors: Konstantinos Kamnitsas, Lisa Koch, Mobarakol Islam, Ziyue Xu, Jorge Cardoso, Qi Dou, Nicola Rieke, Sotirios Tsaftaris
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 66-76
Number of pages: 11
ISBN (Print): 9783031168512
State: Published - 2022
Event: 4th MICCAI Workshop on Domain Adaptation and Representation Transfer, DART 2022, held in conjunction with the 25th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2022 - Singapore, Singapore
Duration: Sep 22 2022 to Sep 22 2022

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13542 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 4th MICCAI Workshop on Domain Adaptation and Representation Transfer, DART 2022, held in conjunction with the 25th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2022
Country/Territory: Singapore
City: Singapore
Period: 9/22/22 to 9/22/22

Keywords

  • Adversarial learning
  • Discriminative learning
  • Restorative learning
  • Self-supervised learning
  • Stepwise pretraining
  • United framework

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
