KEYWORDS: Data modeling, Magnetic resonance imaging, Performance modeling, Education and training, Data privacy, Cross validation, Feature extraction, Tumors, Radiomics, Mixtures
Deep learning models have shown potential in medical image analysis tasks. However, training a generalizable deep learning model requires large amounts of patient data, usually gathered from multiple institutions, which may raise privacy concerns. Federated learning (FL) provides an alternative to sharing data across institutions. Nonetheless, FL is susceptible to several challenges, including inversion attacks on model weights, heterogeneous data distributions, and bias. This study addresses the heterogeneity and bias issues of multi-institution patient data by proposing domain adaptive FL modeling that uses several radiomics features (volume, fractal, texture) for O6-methylguanine-DNA methyltransferase (MGMT) classification across multiple institutions. The proposed domain adaptive FL MGMT classification inherently offers differential privacy (DP) for the patient data. For domain adaptation, two techniques are compared: a mixture of experts (ME) with a gating network, and adversarial alignment. The proposed method is evaluated on publicly available multi-institution data sets (UPENN-GBM, UCSF-PDGM, RSNA-ASNR-MICCAI BraTS-2021) with a total of 1007 patients. Our experiments with 5-fold cross-validation suggest that domain adaptive FL offers improved performance, with a mean accuracy of 69.93% ± 4.8% and an area under the curve of 0.655 ± 0.055 across multiple institutions. In addition, analysis of the probability density of the gating network for domain adaptive FL identifies the institution that may bias the global model prediction due to increased heterogeneity for a given input. Our comparative analysis shows that the proposed method with bias identification offers the best predictive performance compared to commonly employed FL and baseline methods in the literature.
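The mixture-of-experts gating step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the binary MGMT-probability inputs, and the toy numbers are all assumptions for demonstration. Each institution's model acts as an expert, a gating network weights the experts per input, and the dominant gate weight flags the institution most likely to bias the global prediction.

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of gating logits."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def moe_predict(expert_probs, gate_logits):
    """Combine per-institution expert outputs with a gating network.

    expert_probs: one MGMT-positive probability per institution's expert.
    gate_logits:  gating-network scores for the same experts (hypothetical).
    Returns the mixture prediction and the index of the dominant expert,
    i.e. the institution whose expert most influences this prediction.
    """
    gates = softmax(gate_logits)
    mixture = sum(g * p for g, p in zip(gates, expert_probs))
    dominant = max(range(len(gates)), key=lambda i: gates[i])
    return mixture, dominant

# Toy example: three institutions, the first expert dominates the gate.
pred, biased_inst = moe_predict([0.9, 0.2, 0.5], [2.0, 0.1, 0.1])
```

In this sketch, inspecting the gate weights per input plays the role of the probability-density analysis of the gating network described in the abstract.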
According to the Centers for Disease Control and Prevention (CDC), more than 932,000 people in the US have died from a drug overdose since 1999. About 75% of drug overdose deaths in 2020 involved opioids, which suggests that the US is in an opioid overdose epidemic. Identifying individuals likely to develop opioid use disorder (OUD) can help public health officials plan effective prevention, intervention, drug overdose, and recovery policies. Further, a better understanding of overdose prediction and the underlying neurobiology of OUD may lead to new therapeutics. In recent years, very limited work has been done using statistical analysis of functional magnetic resonance imaging (fMRI) to study the neurobiology of opioid addiction in humans. In this work, for the first time in the literature, we propose a machine learning (ML) framework to identify OUD users utilizing clinical fMRI blood-oxygen-level-dependent (BOLD) signals from OUD users and healthy controls (HC). We first obtain the features and validate them against those extracted from selected brain subcortical areas identified in our previous statistical analysis of the fMRI-BOLD signal discriminating OUD subjects from HC. The selected features from three representative brain networks, the default mode network (DMN), salience network (SN), and executive control network (ECN), for both OUD participants and HC subjects are then processed for OUD-versus-HC prediction. Our leave-one-out cross-validated results with sixty-nine OUD and HC cases show a prediction accuracy of 88.40%. These results suggest that the proposed techniques may be used to gain a greater understanding of the neurobiology of OUD, leading to novel therapeutic development.
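The leave-one-out cross-validation (LOOCV) protocol used above can be sketched as follows. This is a simplified illustration under stated assumptions: the nearest-centroid classifier and the toy one-dimensional features stand in for the paper's actual ML model and fMRI-BOLD network features, which are not specified here.

```python
def centroids(X, y):
    """Per-class mean feature vectors (a stand-in for the real classifier)."""
    sums, counts = {}, {}
    for x, label in zip(X, y):
        c = sums.setdefault(label, [0.0] * len(x))
        for j, v in enumerate(x):
            c[j] += v
        counts[label] = counts.get(label, 0) + 1
    return {k: [v / counts[k] for v in c] for k, c in sums.items()}

def predict(cents, x):
    """Assign x to the class with the nearest centroid."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(cents, key=lambda k: dist2(cents[k], x))

def loocv_accuracy(X, y):
    """LOOCV: train on n-1 subjects, test on the single held-out subject,
    repeat for every subject, and report the fraction predicted correctly."""
    hits = 0
    for i in range(len(X)):
        cents = centroids(X[:i] + X[i + 1:], y[:i] + y[i + 1:])
        hits += predict(cents, X[i]) == y[i]
    return hits / len(X)

# Toy separable data: two HC and two OUD subjects with one feature each.
acc = loocv_accuracy([[0.0], [0.1], [1.0], [1.1]], ["HC", "HC", "OUD", "OUD"])
```

With sixty-nine cases, the loop would run sixty-nine times, each subject serving exactly once as the test case.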
KEYWORDS: Education and training, Image segmentation, Liver, Data modeling, Deep learning, Cross validation, Kidney, Visualization, Computed tomography, Visual process modeling
Deep learning (DL)-based medical imaging and image segmentation algorithms achieve impressive performance on many benchmarks. Yet the efficacy of DL methods for future clinical applications may be questionable due to their limited ability to reason with uncertainty and to interpret probable areas of failure in prediction decisions. It is therefore desirable that a DL segmentation model be able to reliably predict its confidence and map it back to the original imaging cases to interpret its prediction decisions. In this work, uncertainty estimation for a multi-organ segmentation task is evaluated to interpret the predictive modeling in DL solutions. We use the state-of-the-art nnU-Net to segment 15 abdominal organs (spleen, right kidney, left kidney, gallbladder, esophagus, liver, stomach, aorta, inferior vena cava, pancreas, right adrenal gland, left adrenal gland, duodenum, bladder, prostate/uterus) using 200 patient cases from the Multimodality Abdominal Multi-Organ Segmentation Challenge 2022. Further, softmax probabilities from different variants of nnU-Net are used to compute the knowledge uncertainty in the DL framework. Knowledge uncertainty from the ensemble of DL models is used to quantify and visualize class activation maps for two example segmented organs. Our preliminary results show that class activation maps may be used to interpret the prediction decisions made by the DL model used in this study.
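The ensemble-based knowledge uncertainty described above is commonly computed as the mutual information between the prediction and the choice of ensemble member: the entropy of the mean softmax minus the mean per-member entropy. The sketch below assumes that decomposition; the abstract does not spell out the exact formula, so treat the function as illustrative rather than the paper's definition.

```python
import math

def entropy(p):
    """Shannon entropy (nats) of one softmax probability vector."""
    return -sum(q * math.log(q) for q in p if q > 0)

def knowledge_uncertainty(member_probs):
    """Mutual-information decomposition for one voxel.

    member_probs: one softmax vector per ensemble member (e.g. per
    nnU-Net variant) over the organ classes. Returns total entropy of
    the ensemble mean minus the expected per-member entropy; it is
    high only where the members disagree, marking epistemic
    (knowledge) uncertainty rather than inherent data ambiguity.
    """
    n = len(member_probs)
    k = len(member_probs[0])
    mean = [sum(p[j] for p in member_probs) / n for j in range(k)]
    total = entropy(mean)
    expected = sum(entropy(p) for p in member_probs) / n
    return total - expected
```

Applying this per voxel over a segmented volume yields the uncertainty map that the abstract visualizes alongside class activation maps.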
Despite aggressive multimodal treatment with chemoradiation therapy and surgical resection, glioblastoma multiforme (GBM) may recur, which is known as recurrent brain tumor (rBT). There are several instances where benign and malignant pathologies appear very similar on radiographic imaging. One such case is radiation necrosis (RN), a moderately benign effect of radiation treatment that is visually almost indistinguishable from rBT on structural magnetic resonance imaging (MRI). There is hence a need for reliable, non-invasive quantitative measurements on routinely acquired brain MRI scans, pre-contrast T1-weighted (T1), post-contrast T1-weighted (T1Gd), T2-weighted (T2), and T2 fluid-attenuated inversion recovery (FLAIR), that can accurately distinguish rBT from RN. In this work, sophisticated radiomic texture features are used to distinguish rBT from RN on multimodal MRI for disease characterization. First, a stochastic multiresolution radiomic descriptor that captures voxel-level textural and structural heterogeneity, as well as intensity and histogram features, is extracted. These features are then used in a machine learning setting to discriminate rBT from RN across the four MRI sequences, with 155 imaging slices for 30 GBM cases (12 RN, 18 rBT). To reduce bias in accuracy estimation, our model is evaluated using leave-one-out cross-validation (LOOCV) and stratified 5-fold cross-validation with a random forest classifier. Our model offers a mean accuracy of 0.967 ± 0.180 for LOOCV and 0.933 ± 0.082 for stratified 5-fold cross-validation using multiresolution texture features for discrimination of rBT from RN. Our findings suggest that sophisticated texture features may offer better discrimination between rBT and RN on MRI compared to other works in the literature.
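With only 30 cases split 12 RN / 18 rBT, the stratification step matters: each fold must preserve the class ratio so that no fold is tested without both classes. A minimal sketch of that fold assignment is below; it is an assumed round-robin implementation for illustration, not the paper's actual splitting code.

```python
def stratified_kfold(labels, k):
    """Assign each sample to one of k folds, distributing every class
    round-robin so each fold keeps roughly the overall class balance.
    Returns a fold index (0..k-1) per sample."""
    folds = [0] * len(labels)
    next_fold = {}
    for i, lab in enumerate(labels):
        f = next_fold.get(lab, 0)
        folds[i] = f
        next_fold[lab] = (f + 1) % k
    return folds

# The study's cohort: 12 RN and 18 rBT cases, 5 folds.
labels = ["RN"] * 12 + ["rBT"] * 18
folds = stratified_kfold(labels, 5)
```

Each held-out fold then serves as the test set for a random forest trained on the remaining folds, mirroring the stratified 5-fold protocol reported above.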