TY - GEN
T1 - Parts2Whole: Self-supervised Contrastive Learning via Reconstruction
T2 - 2nd MICCAI Workshop on Domain Adaptation and Representation Transfer, DART 2020, and the 1st MICCAI Workshop on Distributed and Collaborative Learning, DCL 2020, held in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2020
AU - Feng, Ruibin
AU - Zhou, Zongwei
AU - Gotway, Michael B.
AU - Liang, Jianming
N1 - Funding Information:
This research has been supported partially by ASU and Mayo Clinic through a Seed Grant and an Innovation Grant, and partially by the NIH under Award Number R01HL128785. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. This work has utilized the GPUs provided partially by ASU Research Computing and partially by the Extreme Science and Engineering Discovery Environment (XSEDE), funded by the National Science Foundation (NSF) under grant number ACI-1548562. We would like to thank Jiaxuan Pang, Md Mahfuzur Rahman Siddiquee, and Zuwei Guo for evaluating I3D, NiftyNet, and MedicalNet, respectively. The content of this paper is covered by patents pending.
Publisher Copyright:
© 2020, Springer Nature Switzerland AG.
PY - 2020
Y1 - 2020
N2 - Contrastive representation learning is the state of the art in computer vision, but it requires huge mini-batch sizes, special network designs, or memory banks, making it unappealing for 3D medical imaging. In 3D medical imaging, reconstruction-based self-supervised learning reaches a new height in performance but lacks a mechanism for learning contrastive representations. Therefore, this paper proposes a new framework for self-supervised contrastive learning via reconstruction, called Parts2Whole, which exploits the universal and intrinsic part-whole relationship to learn contrastive representations without using a contrastive loss: reconstructing an image (whole) from its own parts compels the model to learn similar latent features for all of those parts, while reconstructing different images (wholes) from their respective parts forces the model to simultaneously push parts belonging to different wholes farther apart in the latent space; the trained model is thereby capable of distinguishing images. We have evaluated Parts2Whole on five distinct imaging tasks covering both classification and segmentation, and compared it with four competing, publicly available 3D pretrained models, showing that Parts2Whole significantly outperforms them on two of the five tasks while achieving competitive performance on the remaining three. This superior performance is attributable to the contrastive representations learned with Parts2Whole. Code and pretrained models are available at github.com/JLiangLab/Parts2Whole.
AB - Contrastive representation learning is the state of the art in computer vision, but it requires huge mini-batch sizes, special network designs, or memory banks, making it unappealing for 3D medical imaging. In 3D medical imaging, reconstruction-based self-supervised learning reaches a new height in performance but lacks a mechanism for learning contrastive representations. Therefore, this paper proposes a new framework for self-supervised contrastive learning via reconstruction, called Parts2Whole, which exploits the universal and intrinsic part-whole relationship to learn contrastive representations without using a contrastive loss: reconstructing an image (whole) from its own parts compels the model to learn similar latent features for all of those parts, while reconstructing different images (wholes) from their respective parts forces the model to simultaneously push parts belonging to different wholes farther apart in the latent space; the trained model is thereby capable of distinguishing images. We have evaluated Parts2Whole on five distinct imaging tasks covering both classification and segmentation, and compared it with four competing, publicly available 3D pretrained models, showing that Parts2Whole significantly outperforms them on two of the five tasks while achieving competitive performance on the remaining three. This superior performance is attributable to the contrastive representations learned with Parts2Whole. Code and pretrained models are available at github.com/JLiangLab/Parts2Whole.
KW - 3D Self-supervised Learning
KW - Contrastive representation learning
KW - Transfer learning
UR - http://www.scopus.com/inward/record.url?scp=85092146742&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85092146742&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-60548-3_9
DO - 10.1007/978-3-030-60548-3_9
M3 - Conference contribution
AN - SCOPUS:85092146742
SN - 9783030605476
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 85
EP - 95
BT - Domain Adaptation and Representation Transfer, and Distributed and Collaborative Learning - 2nd MICCAI Workshop, DART 2020, and 1st MICCAI Workshop, DCL 2020, Held in Conjunction with MICCAI 2020, Proceedings
A2 - Albarqouni, Shadi
A2 - Bakas, Spyridon
A2 - Kamnitsas, Konstantinos
A2 - Cardoso, M. Jorge
A2 - Landman, Bennett
A2 - Li, Wenqi
A2 - Milletari, Fausto
A2 - Rieke, Nicola
A2 - Roth, Holger
A2 - Xu, Daguang
A2 - Xu, Ziyue
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 4 October 2020 through 8 October 2020
ER -