Certain body composition phenotypes, such as sarcopenia, are well established as predictive markers of post-surgical complications and overall survival in lung cancer patients. However, their association with incident lung cancer risk in the screening population remains unclear. We study the feasibility of body composition analysis using chest low-dose computed tomography (LDCT). A two-stage, fully automatic pipeline is developed to assess the cross-sectional areas of body composition components, including subcutaneous adipose tissue (SAT), muscle, visceral adipose tissue (VAT), and bone, at the T5, T8, and T10 vertebral levels. The pipeline is developed using 61 cases of the VerSe'20 dataset, 40 annotated cases of NLST, and 851 in-house screening cases. On a test cohort consisting of 30 cases from the in-house screening cohort (age 55-73, 50% female) and 42 cases of NLST (age 55-75, 59.5% female), the pipeline achieves a root mean square error (RMSE) of 7.25 mm (95% CI: [6.61, 7.85]) for vertebral level identification and mean Dice similarity coefficients (DSC) of 0.99 ± 0.02, 0.96 ± 0.03, and 0.95 ± 0.04 for SAT, muscle, and VAT segmentation, respectively. The pipeline is then generalized to the CT arm of the NLST dataset (25,205 subjects, 40.8% female, 1,056 incident lung cancers). Time-to-event analysis of lung cancer incidence indicates an inverse association between measured muscle cross-sectional area and incident lung cancer risk (p < 0.001 for females, p < 0.001 for males). In conclusion, automatic body composition analysis using routine lung screening LDCT is feasible.
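The reported evaluation metrics follow their standard definitions; the sketch below is a minimal illustration (NumPy only, with hypothetical array names) of how the Dice similarity coefficient for a segmentation mask and the RMSE of vertebral level positions could be computed, not the authors' actual evaluation code.

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    denom = pred.sum() + true.sum()
    return float(2.0 * intersection / denom) if denom > 0 else 1.0

def rmse(pred_positions_mm, true_positions_mm) -> float:
    """Root mean square error (in mm) between predicted and reference
    vertebral level positions along the scan axis."""
    diff = np.asarray(pred_positions_mm) - np.asarray(true_positions_mm)
    return float(np.sqrt(np.mean(diff ** 2)))

# Illustrative usage (array names are hypothetical):
# dsc_sat = dice_coefficient(pred_sat_mask, gt_sat_mask)
# level_rmse = rmse(pred_t8_z_mm, gt_t8_z_mm)
```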
Clinical data elements (CDEs) (e.g., age, smoking history), blood biomarkers, and chest computed tomography (CT) structural features have been regarded as effective means of assessing lung cancer risk. These independent variables provide complementary information, and we hypothesize that combining them will improve prediction accuracy. In practice, not all patients have all of these variables available. In this paper, we propose a new network design, termed the multi-path multi-modal missing network (M3Net), to integrate the multi-modal data (i.e., CDEs, biomarkers, and CT images) in the presence of missing modalities using a multi-path neural network. Each path learns discriminative features of one modality, and the different modalities are fused in a second stage for an integrated prediction. The network can be trained end-to-end with both medical image features and CDEs/biomarkers, or it can make a prediction from a single modality. We evaluate M3Net on datasets from three sites of the Consortium for Molecular and Cellular Characterization of Screen-Detected Lesions (MCL) project. Our method is cross-validated within a cohort of 1,291 subjects (383 subjects with complete CDEs/biomarkers and CT images) and externally validated on a cohort of 99 subjects (all with complete CDEs/biomarkers and CT images). Both the cross-validation and external-validation results show that combining multiple modalities significantly improves prediction performance over any single modality. The results also suggest that integrating subjects missing either CDEs/biomarkers or CT imaging features contributes to the discriminatory power of our model (p < 0.05, two-tailed bootstrap test). In summary, the proposed M3Net framework provides an effective way to integrate image and non-image data in the context of missing information.
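The abstract describes one path per modality, a second-stage fusion for the integrated prediction, and a fallback to single-modality prediction when data are missing. The sketch below is not the authors' M3Net; it is only a minimal PyTorch illustration of that multi-path, missing-modality pattern, with all layer sizes and the precomputed image feature vector being hypothetical.

```python
import torch
import torch.nn as nn

class MultiPathNet(nn.Module):
    """Illustrative multi-path network: one encoder per modality, a fusion
    stage when all modalities are present, and per-path heads so a
    prediction can still be made when a modality is missing."""

    def __init__(self, cde_dim=16, marker_dim=8, img_feat_dim=128, hidden=64):
        super().__init__()
        # One feature path per modality (dimensions are assumptions).
        self.cde_path = nn.Sequential(nn.Linear(cde_dim, hidden), nn.ReLU())
        self.marker_path = nn.Sequential(nn.Linear(marker_dim, hidden), nn.ReLU())
        self.img_path = nn.Sequential(nn.Linear(img_feat_dim, hidden), nn.ReLU())
        # Per-modality heads: usable when only some modalities are available.
        self.cde_head = nn.Linear(hidden, 1)
        self.marker_head = nn.Linear(hidden, 1)
        self.img_head = nn.Linear(hidden, 1)
        # Second-stage fusion head over concatenated path features.
        self.fusion_head = nn.Sequential(
            nn.Linear(3 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, cde=None, marker=None, img_feat=None):
        feats, logits = [], []
        if cde is not None:
            f = self.cde_path(cde)
            feats.append(f)
            logits.append(self.cde_head(f))
        if marker is not None:
            f = self.marker_path(marker)
            feats.append(f)
            logits.append(self.marker_head(f))
        if img_feat is not None:
            f = self.img_path(img_feat)
            feats.append(f)
            logits.append(self.img_head(f))
        if len(feats) == 3:
            # All modalities present: fused, integrated prediction.
            return self.fusion_head(torch.cat(feats, dim=1))
        # Missing modality: average the available single-path predictions.
        return torch.stack(logits, dim=0).mean(dim=0)
```

As a usage example, `MultiPathNet()(cde=torch.randn(4, 16))` returns a risk logit per subject from the CDE path alone, while passing all three inputs routes the concatenated features through the fusion head.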