Diffuse midline glioma (DMG) is a rare but fatal pediatric brain tumor. Tumor MRI features, extracted from segmented DMG, have shown promise for predicting DMG progression and overall survival. The data and knowledge accumulated from the more common adult brain tumors cannot be directly applied to DMG due to differences in tumor location and appearance. The purpose of this work is to develop a transfer learning-based approach to automatically preprocess and segment sub-regions of DMG from multi-sequence MRI. We retrospectively collected T1, contrast-enhanced T1, T2, and T2 FLAIR images of 45 children diagnosed with DMG. MR images at two timepoints were considered: at diagnosis and after completion of radiation therapy (RT), yielding a DMG dataset of 82 cases. Manual segmentations of two labels were created: the enhancing region (ER) and the whole tumor (WT). We modified the SegResNet model developed by NVIDIA and pre-trained it on the BraTS 2021 challenge dataset, which contains 1,251 subjects with adult glioblastoma multiforme. The DMG data were automatically preprocessed to match the resolution and format of the BraTS challenge input data. A 5-fold cross-validation was performed on the preprocessed DMG data to fine-tune and validate the model. The proposed method achieved mean Dice scores of 0.831 and 0.840 for the ER and WT segmentations, respectively. The method produced good segmentation results despite the small dataset, demonstrating that transfer learning from adult brain tumors to rare pediatric brain tumors is feasible and improves segmentation quality.
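The abstract describes fine-tuning a BraTS-pretrained SegResNet on the DMG data. The following is a minimal sketch of that workflow using MONAI and PyTorch; the checkpoint path, channel counts, output-label layout, and training hyperparameters are assumptions for illustration and are not values reported by the authors.

```python
# Hedged sketch: fine-tuning a BraTS-pretrained SegResNet on DMG data (MONAI + PyTorch).
import torch
from monai.networks.nets import SegResNet
from monai.losses import DiceLoss

# Four input MRI sequences (T1, contrast-enhanced T1, T2, T2 FLAIR); two output
# channels (ER, WT) is an assumption inferred from the abstract's two labels.
model = SegResNet(spatial_dims=3, in_channels=4, out_channels=2, init_filters=16)

# Load weights pre-trained on the BraTS 2021 dataset (the file name is hypothetical).
state = torch.load("segresnet_brats2021_pretrained.pt", map_location="cpu")
model.load_state_dict(state, strict=False)  # strict=False tolerates a changed output head

loss_fn = DiceLoss(sigmoid=True)  # soft Dice on per-channel probabilities
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def finetune_step(images, labels):
    """One fine-tuning step; images: (B, 4, D, H, W), labels: (B, 2, D, H, W)."""
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a 5-fold cross-validation setup, such a step would be run over each training fold's preprocessed cases, with Dice evaluated on the held-out fold.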
Automated tissue characterization is one of the major applications of computer-aided diagnosis systems. Deep learning techniques have recently demonstrated impressive performance for image patch-based tissue characterization. However, existing patch-based tissue classification techniques struggle to exploit useful shape information. Local and global shape knowledge, such as regional boundary changes, diameter, and volumetrics, can be useful in classifying tissues, especially in scenarios where the appearance signature does not provide significant classification information. In this work, we present a deep neural network-based method for the automated segmentation of tumors referred to as optic pathway gliomas (OPG), located within the anterior visual pathway (AVP; optic nerve, chiasm, or tracts), using joint shape and appearance learning. Voxel intensity values of commonly used MRI sequences are generally not indicative of OPG. To be considered an OPG, current clinical practice dictates that some portion of the AVP must demonstrate shape enlargement. The method proposed in this work integrates multi-sequence magnetic resonance images (T1, T2, and FLAIR) along with local boundary changes to train a deep neural network. For training and evaluation purposes, we used a dataset of multi-sequence MRI obtained from 20 subjects (10 controls, 10 NF1+OPG). To the best of our knowledge, this is the first deep representation learning-based approach designed to merge shape and multi-channel appearance data for glioma detection. In our experiments, mean misclassification errors of 2.39% and 0.48% were observed for glioma and control patches extracted from the AVP, respectively. Moreover, an overall Dice similarity coefficient of 0.87±0.13 (0.93±0.06 for healthy tissue, 0.78±0.18 for glioma tissue) demonstrates the potential of the proposed method in the accurate localization and early detection of OPG.
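The abstract describes fusing multi-channel appearance patches with local shape information for patch-level classification. Below is a minimal sketch of one such fusion design; the architecture, feature dimensions, class count, and fusion strategy are illustrative assumptions, not the authors' exact network.

```python
# Hedged sketch: fusing appearance patches (T1/T2/FLAIR) with local shape descriptors
# for patch-level glioma vs. control classification. All dimensions are assumed.
import torch
import torch.nn as nn

class ShapeAppearanceNet(nn.Module):
    def __init__(self, in_channels=3, n_shape_feats=8, n_classes=2):
        super().__init__()
        # Appearance branch: small 3D CNN over multi-sequence MRI patches.
        self.appearance = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        # Shape branch: a vector of local descriptors (e.g., boundary change, diameter).
        self.shape = nn.Sequential(nn.Linear(n_shape_feats, 16), nn.ReLU())
        # Joint classifier over the concatenated appearance and shape embeddings.
        self.classifier = nn.Linear(32 + 16, n_classes)

    def forward(self, patch, shape_feats):
        fused = torch.cat([self.appearance(patch), self.shape(shape_feats)], dim=1)
        return self.classifier(fused)

# Example: a batch of 4 patches of size 16^3 with 3 MRI channels and 8 shape features.
logits = ShapeAppearanceNet()(torch.randn(4, 3, 16, 16, 16), torch.randn(4, 8))
```

Fusing the two branches at the feature level lets the classifier fall back on shape cues when voxel intensities alone are not discriminative, which matches the motivation given in the abstract.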