Early detection of lung cancer is essential for reducing mortality. Recent studies have demonstrated the clinical utility of low-dose computed tomography (CT) for detecting lung cancer among individuals selected on the basis of very limited clinical information. However, this strategy yields high false positive rates, which can lead to unnecessary and potentially harmful procedures. To address these challenges, we established a pipeline that co-learns from detailed clinical demographics and 3D CT images. Toward this end, we leveraged data from the Consortium for Molecular and Cellular Characterization of Screen-Detected Lesions (MCL), which focuses on early detection of lung cancer. A 3D attention-based deep convolutional neural network (DCNN) is proposed to identify lung cancer from chest CT scans without prior knowledge of the anatomical location of the suspicious nodule. To improve non-invasive discrimination between benign and malignant nodules, we applied a random forest classifier to a dataset integrating clinical information with imaging data. The results show that clinical demographics alone yielded an AUC of 0.635, while the attention network alone reached an accuracy of 0.687. In contrast, the proposed pipeline integrating clinical and imaging variables reached an AUC of 0.787 on the testing dataset. The proposed network both efficiently captures anatomical information for classification and generates attention maps that explain the features driving performance.
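To make the fusion step concrete, below is a minimal sketch of one way to combine clinical demographics with DCNN-derived image features in a random forest, as the abstract describes. It is an illustrative reading, not the authors' exact pipeline: the feature dimensions, variable names, and random data are placeholders, and the DCNN embedding is assumed to have been extracted beforehand.

```python
# Sketch of late fusion of clinical and imaging features, assuming the
# 3D attention DCNN has already been reduced to a fixed-length feature
# vector per scan. All data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_subjects = 200

clinical = rng.normal(size=(n_subjects, 5))      # e.g., age, pack-years, ...
image_feats = rng.normal(size=(n_subjects, 64))  # hypothetical DCNN embedding
labels = rng.integers(0, 2, size=n_subjects)     # benign (0) vs malignant (1)

# Concatenate the two views and classify with a random forest.
X = np.concatenate([clinical, image_feats], axis=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                          random_state=0)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```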
Splenomegaly segmentation on abdominal computed tomography (CT) scans is essential for identifying spleen biomarkers and has applications in the quantitative assessment of patients with liver and spleen disease. Automated segmentation with deep convolutional neural networks has shown promising performance for splenomegaly segmentation. However, manual labeling of abdominal structures is resource intensive, so labeled abdominal imaging data remain a rare resource despite their essential role in algorithm training. Hence, the set of annotated labels (e.g., spleen only) is typically limited within a single study. With the development of data-sharing techniques, however, more and more labeled cohorts are becoming publicly available from different sources. A key new challenge is to co-learn from such multi-source data even when each study labels a different number of abdominal organs. Thus, it is appealing to design a co-learning strategy that trains a deep network from heterogeneously labeled scans. In this paper, we propose a new deep convolutional neural network (DCNN) based method that integrates heterogeneous, multi-source labeled cohorts for splenomegaly segmentation. To enable the proposed approach, a novel loss function based on the Dice similarity coefficient is introduced to adaptively learn multi-organ information from different sources. Three cohorts were employed in our experiments: the first training cohort (98 CT scans) has only splenomegaly labels, while the second training cohort (100 CT scans) has 15 distinct anatomical labels with normal spleens. A separate, independent cohort of 19 splenomegaly CT scans with labeled spleens was used as the testing cohort. The proposed method achieved the highest median Dice similarity coefficient (0.94), superior (p-value < 0.01 against each other method) to the baselines of multi-atlas segmentation (0.86), SS-Net segmentation with only spleen labels (0.90), and U-Net segmentation with multi-organ training (0.91). Our approach to adapting the loss function and training structure is not specific to the abdominal context and may be beneficial in other settings where datasets with varied label sets are available.
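One natural way to realize the adaptive Dice-based loss described above is to mask out organ channels that are unlabeled in a given scan so they contribute no gradient. The PyTorch sketch below illustrates this idea under that assumption; it is not the paper's exact loss, and the function name, tensor shapes, and toy data are hypothetical.

```python
# Masked Dice loss for heterogeneously labeled cohorts: organs without
# ground truth in a scan are excluded from the loss. Illustrative only.
import torch

def masked_dice_loss(pred, target, label_mask, eps=1e-6):
    """pred, target: (batch, n_organs, D, H, W) probabilities / binary masks;
    label_mask: (batch, n_organs), 1 where an organ is annotated, else 0."""
    dims = (2, 3, 4)
    intersection = (pred * target).sum(dims)
    denom = pred.sum(dims) + target.sum(dims)
    dice = (2 * intersection + eps) / (denom + eps)   # (batch, n_organs)
    dice = dice * label_mask                          # drop unlabeled organs
    return 1 - dice.sum() / label_mask.sum().clamp(min=1)

# Toy usage: one spleen-only scan (organ 0) and one fully labeled scan.
pred = torch.rand(2, 3, 8, 8, 8, requires_grad=True)
target = (torch.rand(2, 3, 8, 8, 8) > 0.5).float()
mask = torch.tensor([[1., 0., 0.],   # cohort with spleen label only
                     [1., 1., 1.]])  # cohort with all organs labeled
loss = masked_dice_loss(pred, target, mask)
loss.backward()
```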
Whole brain segmentation on structural magnetic resonance imaging (MRI) is essential for understanding neuroanatomical-functional relationships. Traditionally, multi-atlas segmentation has been regarded as the standard method for whole brain segmentation. In the past few years, deep convolutional neural network (DCNN) segmentation methods have demonstrated advantages in both accuracy and computational efficiency. Recently, we proposed the spatially localized atlas network tiles (SLANT) method, which segments a 3D MRI brain scan into 132 anatomical regions. DCNN segmentation methods commonly yield inferior performance under external validation, especially when the testing patterns were not present in the training cohorts. Recently, we obtained a clinically acquired, multi-sequence MRI brain cohort of 1480 de-identified brain MRI scans from 395 patients acquired under seven different MRI protocols; each subject has at least two scans from different protocols. Herein, we assess the SLANT method's intra- and inter-protocol reproducibility. SLANT achieved a coefficient of variation (CV) below 0.05 in intra-protocol experiments and below 0.15 in inter-protocol experiments. These results show that the SLANT method achieves high intra- and inter-protocol reproducibility.
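For clarity, the reproducibility metric reported above is the coefficient of variation (CV = standard deviation / mean) of each region's segmented volume across repeated scans of a subject. The sketch below shows this computation under that assumption; the function name and example volumes are illustrative.

```python
# Per-region coefficient of variation across repeat acquisitions of one
# subject. Shapes and values are synthetic placeholders.
import numpy as np

def coefficient_of_variation(volumes):
    """volumes: (n_scans, n_regions) segmented volumes for one subject,
    e.g., SLANT volumes for 132 regions across repeated scans."""
    volumes = np.asarray(volumes, dtype=float)
    return volumes.std(axis=0, ddof=1) / volumes.mean(axis=0)

# Toy example: one subject, two scans, three regions (volumes in mm^3).
vols = np.array([[1500.0, 820.0, 310.0],
                 [1488.0, 805.0, 330.0]])
print(coefficient_of_variation(vols))  # per-region CV; < 0.05 is high reproducibility
```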