Learning semantics-enriched representation via self-discovery, self-classification, and self-restoration

Fatemeh Haghighi, Mohammad Reza Hosseinzadeh Taher, Zongwei Zhou, Michael B. Gotway, Jianming Liang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

44 Scopus citations


Medical images are naturally associated with rich semantics about the human anatomy, reflected in an abundance of recurring anatomical patterns, offering unique potential to foster deep semantic representation learning and yield semantically more powerful models for different medical applications. But how exactly such strong yet free semantics embedded in medical images can be harnessed for self-supervised learning remains largely unexplored. To this end, we train deep models to learn semantically enriched visual representation by self-discovery, self-classification, and self-restoration of the anatomy underlying medical images, resulting in a semantics-enriched, general-purpose, pre-trained 3D model, named Semantic Genesis. We compare our Semantic Genesis with all publicly available pre-trained models, obtained by either self-supervision or full supervision, on six distinct target tasks, covering both classification and segmentation in various medical modalities (i.e., CT, MRI, and X-ray). Our extensive experiments demonstrate that Semantic Genesis significantly outperforms all of its 3D counterparts as well as the de facto ImageNet-based transfer learning in 2D. This performance is attributed to our novel self-supervised learning framework, which encourages deep models to learn compelling semantic representation from the abundant anatomical patterns that arise from the consistent anatomies embedded in medical images. Code and pre-trained Semantic Genesis are available at https://github.com/JLiangLab/SemanticGenesis.
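The abstract's three pretext tasks can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: the crop coordinates, patch sizes, and the local-shuffling perturbation used here are illustrative assumptions. Self-discovery crops patches at fixed locations across roughly aligned scans, so each location yields a recurring anatomical pattern whose index serves as a free pseudo-label (self-classification); self-restoration perturbs a patch and asks the network to reconstruct the original.

```python
import numpy as np

rng = np.random.default_rng(0)

# Self-discovery (sketch): fixed crop locations across co-aligned scans.
# Each location index doubles as a pseudo-label for self-classification.
coords = [(0, 0, 0), (8, 8, 8)]            # hypothetical crop locations
scans = rng.random((4, 16, 16, 16))        # 4 toy 3D "scans"

patches, labels = [], []
for scan in scans:
    for lbl, (x, y, z) in enumerate(coords):
        patches.append(scan[x:x + 8, y:y + 8, z:z + 8])
        labels.append(lbl)                 # self-classification target

# Self-restoration (sketch): perturb a patch (here, shuffling voxels within
# small local blocks) and train the model to reconstruct the original.
def local_shuffle(patch, block=4, rng=rng):
    out = patch.copy()
    for x in range(0, patch.shape[0], block):
        for y in range(0, patch.shape[1], block):
            for z in range(0, patch.shape[2], block):
                cube = out[x:x + block, y:y + block, z:z + block]
                flat = cube.reshape(-1)    # non-contiguous view -> copy
                rng.shuffle(flat)
                out[x:x + block, y:y + block, z:z + block] = flat.reshape(cube.shape)
    return out

distorted = local_shuffle(patches[0])
# Training pairs: (distorted patch, pseudo-label) for classification,
# (distorted patch, original patch) for restoration.
```

The perturbation only permutes voxels locally, so intensity statistics are preserved while local structure is destroyed, which is what makes restoration a meaningful learning signal.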

Original language: English (US)
Title of host publication: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020 - 23rd International Conference, Proceedings
Editors: Anne L. Martel, Purang Abolmaesumi, Danail Stoyanov, Diana Mateus, Maria A. Zuluaga, S. Kevin Zhou, Daniel Racoceanu, Leo Joskowicz
Publisher: Springer Science and Business Media Deutschland GmbH
Number of pages: 11
ISBN (Print): 9783030597092
State: Published - 2020
Event: 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2020 - Lima, Peru
Duration: Oct 4 2020 – Oct 8 2020

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12261 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2020


Keywords

  • 3D model pre-training
  • Self-supervised learning
  • Transfer learning

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science

