Presentation + Paper
Design, training, and applications of foundation model for chest computed tomography volumes
2 April 2024
Amara Tariq, Bhavik N. Patel, and Imon Banerjee
Abstract
Self-supervised pretraining can reduce the amount of labeled training data needed by pre-learning fundamental visual characteristics of the imaging data. We developed a foundation model for chest computed tomography (CT) exams using a self-supervised training strategy of masked image region prediction on 1M chest CT slices. The model was evaluated on two downstream tasks: pulmonary embolism (PE) detection (classification) and lung nodule segmentation. Using the foundation model as a backbone improved performance and reduced the computational effort needed for downstream tasks compared to task-specific state-of-the-art (SOTA) models. PE detection improved for training dataset sizes as large as 380K, with a maximum gain of 5% over SOTA. The segmentation model initialized with foundation model weights learned twice as fast as a randomly initialized model. The foundation model can thus be fine-tuned with limited task-specific annotated data for a variety of downstream imaging tasks, accelerating research in biomedical imaging informatics.
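To make "masked image region prediction" concrete, the sketch below shows a minimal masked-autoencoder-style pretraining step: patches of a 2D slice are randomly hidden, a transformer encoder sees only the visible patches, and a lightweight decoder regresses the pixel values of the hidden regions. The abstract does not specify the architecture, so the patch size, mask ratio, transformer depths, 224x224 input, and all class/parameter names here are illustrative assumptions, not the paper's actual configuration.

import torch
import torch.nn as nn

PATCH = 16          # assumed patch size for a 2D CT slice
MASK_RATIO = 0.75   # assumed fraction of patches hidden from the encoder

def patchify(x, p=PATCH):
    """Split (B, 1, H, W) slices into (B, N, p*p) flattened patches."""
    B, C, H, W = x.shape
    x = x.unfold(2, p, p).unfold(3, p, p)            # (B, C, H/p, W/p, p, p)
    return x.reshape(B, -1, p * p)

class TinyMaskedPretrainer(nn.Module):
    """Toy encoder-decoder that regresses pixel values of masked patches."""
    def __init__(self, dim=256, patch=PATCH, n_patches=196):  # 196 = (224/16)^2
        super().__init__()
        self.embed = nn.Linear(patch * patch, dim)
        self.pos = nn.Parameter(torch.randn(1, n_patches, dim) * 0.02)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=4)
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), num_layers=2)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.head = nn.Linear(dim, patch * patch)

    def forward(self, slices):
        patches = patchify(slices)                   # (B, N, p*p)
        B, N, _ = patches.shape
        tokens = self.embed(patches) + self.pos      # patch embeddings + positions

        # Randomly split patches into a visible set and a masked set.
        n_keep = int(N * (1 - MASK_RATIO))
        perm = torch.rand(B, N, device=slices.device).argsort(dim=1)
        keep, masked = perm[:, :n_keep], perm[:, n_keep:]
        take = lambda t, i: torch.gather(t, 1, i.unsqueeze(-1).expand(-1, -1, t.size(-1)))

        latent = self.encoder(take(tokens, keep))    # encode visible patches only
        queries = self.mask_token + take(self.pos.expand(B, -1, -1), masked)
        decoded = self.decoder(torch.cat([latent, queries], dim=1))
        pred = self.head(decoded[:, n_keep:])        # predictions at masked positions
        target = take(patches, masked)               # ground-truth masked pixels
        return nn.functional.mse_loss(pred, target)

model = TinyMaskedPretrainer()
loss = model(torch.randn(2, 1, 224, 224))            # a fake batch of CT slices
loss.backward()

In a setup like this, the decoder would typically be discarded after pretraining and the encoder reused as the backbone, with a small task-specific head (e.g., a classifier for PE detection or a segmentation decoder for lung nodules) fine-tuned on the limited annotated data.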
Conference Presentation
(2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Amara Tariq, Bhavik N. Patel, and Imon Banerjee "Design, training, and applications of foundation model for chest computed tomography volumes", Proc. SPIE 12926, Medical Imaging 2024: Image Processing, 1292611 (2 April 2024); https://doi.org/10.1117/12.3003042
KEYWORDS
Data modeling, Education and training, Chest, Lung, Computed tomography, Visual process modeling, Image segmentation