Brain metastases are the most common malignant brain tumors, occurring in 10%-30% of adult patients with systemic cancer. With recent advances in treatment options, there is increasing evidence that automated detection and segmentation from MRI can assist clinicians in diagnosis and therapy planning. In this study, we investigate the impact of data domain on self-supervised learning (SSL) for pretraining a deep learning network to detect and segment brain metastases on 3D post-contrast T1-weighted images. We pretrained a 3D patch-based U-Net using the Model Genesis framework on three subject cohorts with different data domains. The pretrained networks were then finetuned on brain MR scans from patients with metastases as the downstream task dataset. We analyzed the impact of data domain on SSL by examining the evolution of validation metrics, FROC analyses, and the testing performance of early-trained and best-validated models. Our results suggest that, in the early stage of finetuning for the target task, SSL is crucial for faster training convergence, and a similar data domain for SSL can help attain improved detection and segmentation performance earlier. However, we observed that the importance of data domain similarity for SSL progressively diminished as training continued for a sufficient number of iterations in our relatively large data regime. After training convergence, the best-validated models pretrained with SSL provided enhanced detection performance over the model without pretraining, regardless of data domain.
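As an illustration of the pretrain-then-finetune workflow described above, the following is a minimal PyTorch sketch of Model Genesis-style self-supervised restoration followed by supervised finetuning. The tiny `UNet3D` stand-in, the toy data loaders, and the single corruption used here are hypothetical simplifications; the actual framework combines several transformations (non-linear intensity shifts, local pixel shuffling, in- and out-painting) and a full 3D U-Net.

```python
import torch
import torch.nn as nn

class UNet3D(nn.Module):
    """Trivial stand-in for a real 3D patch-based U-Net."""
    def __init__(self, in_channels=1, out_channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, out_channels, 3, padding=1))
    def forward(self, x):
        return self.net(x)

def distort(patch: torch.Tensor) -> torch.Tensor:
    """One Model Genesis-style corruption: zero out a random sub-block
    (in-painting). The full framework also applies non-linear intensity
    shifts, local pixel shuffling, and out-painting."""
    corrupted = patch.clone()
    d, h, w = patch.shape[-3:]
    z, y, x = (int(torch.randint(0, s // 2, (1,))) for s in (d, h, w))
    corrupted[..., z:z + d // 4, y:y + h // 4, x:x + w // 4] = 0.0
    return corrupted

# Toy data standing in for unlabeled pretraining patches and labeled BM scans.
pretrain_loader = [torch.rand(1, 1, 32, 32, 32) for _ in range(4)]
finetune_loader = [(torch.rand(1, 1, 32, 32, 32),
                    (torch.rand(1, 1, 32, 32, 32) > 0.5).float())
                   for _ in range(4)]

model = UNet3D()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# --- SSL pretraining: learn to restore the original patch ---
for patch in pretrain_loader:
    loss = nn.functional.mse_loss(model(distort(patch)), patch)
    opt.zero_grad(); loss.backward(); opt.step()

# --- Finetuning: reuse the pretrained weights for the downstream task ---
for patch, mask in finetune_loader:
    loss = nn.functional.binary_cross_entropy_with_logits(model(patch), mask)
    opt.zero_grad(); loss.backward(); opt.step()
```

The key design point the study probes is exactly this weight reuse: the same encoder-decoder weights move from the restoration objective to the segmentation objective, so the pretraining data domain can influence how quickly the downstream loss converges.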
Recent technological advances in deep learning (DL) have led to more accurate brain metastasis (BM) detection. As a data-driven approach, DL's performance relies heavily on the size and quality of the training data. However, collecting large amounts of medical data is costly, and it is difficult to include BMs with diverse locations, sizes, and structures. We therefore propose a 3D-2D GAN for fully 3D BM synthesis with configurable parameters. First, two 3D networks synthesize the mask and quantized intensity map of a lesion from three concentric spheres, which control the lesion's location, size, and structure. Then, a 2D network synthesizes the final lesion with a proper appearance from the quantized intensity map and the background MR image. With this 3D-2D design, the 3D networks keep the synthetic metastasis spatially continuous in all three dimensions through the guidance of the 3D intermediate representation of the lesion, while the 2D network enables the use of a 2D perceptual loss to make the final synthesized lesion look realistic. In addition, different network upsampling strategies and postprocessing are used to control the heterogeneity and contrast of the synthetic lesion. All synthesized images were reviewed by a radiologist; the indistinguishability rate of the synthesized lesions was above 70%. The configurable parameters for the lesion's location, size, structure, heterogeneity, and contrast were found to be effective. Our work demonstrates the feasibility of synthesizing configurable 3D BM lesions for fully 3D data augmentation.
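The data flow of the 3D-2D design can be sketched as follows. The single-convolution generators and the random `background` volume are hypothetical placeholders for the paper's trained networks and real MR inputs; only the overall pipeline, from concentric spheres through a 3D mask and quantized intensity map to slice-wise 2D refinement, follows the description above.

```python
import torch
import torch.nn as nn

def concentric_spheres(size, radii):
    """Rasterize 3 concentric spheres: the center fixes the lesion's
    location, the outer radius its size, and the radius ratios its
    internal structure."""
    ax = torch.arange(size).float()
    zz, yy, xx = torch.meshgrid(ax, ax, ax, indexing="ij")
    c = size / 2
    dist = ((zz - c) ** 2 + (yy - c) ** 2 + (xx - c) ** 2).sqrt()
    return torch.stack([(dist <= r).float() for r in radii]).unsqueeze(0)

# Trivial conv stand-ins for the paper's trained generators.
mask_gen_3d = nn.Conv3d(3, 1, 3, padding=1)    # spheres -> 3D lesion mask
inten_gen_3d = nn.Conv3d(4, 1, 3, padding=1)   # spheres + mask -> quantized map
refiner_2d = nn.Conv2d(2, 1, 3, padding=1)     # map + background slice -> lesion

spheres = concentric_spheres(64, radii=(8, 16, 24))     # (1, 3, 64, 64, 64)
background = torch.rand(1, 1, 64, 64, 64)               # stand-in MR patch

mask = torch.sigmoid(mask_gen_3d(spheres))              # 3D lesion mask
qmap = inten_gen_3d(torch.cat([spheres, mask], dim=1))  # quantized intensity map

# 2D refinement slice by slice: the 3D quantized map keeps the lesion
# continuous across slices, while the 2D network (trained with a 2D
# perceptual loss in the paper) adds realistic appearance.
slices = [refiner_2d(torch.cat([qmap[:, :, z], background[:, :, z]], dim=1))
          for z in range(qmap.shape[2])]
lesion = torch.stack(slices, dim=2)                     # (1, 1, 64, 64, 64)
```

The division of labor is the point of the architecture: 3D continuity is enforced by the intermediate 3D representation, while realism is handled in 2D, where perceptual losses pretrained on natural images are readily available.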
To create tumor “habitats” from the “signatures” discovered in multimodality metabolic and physiological images, we developed a processing-pipeline framework. The pipeline consists of six major steps: (1) creating superpixels as the spatial unit in a tumor volume; (2) forming a data matrix {D} containing all multimodality image parameters at the superpixels; (3) forming and clustering a covariance or correlation matrix {C} of the image parameters to discover major image “signatures;” (4) clustering the superpixels and organizing the parameter order of the {D} matrix according to the order found in step 3; (5) creating “habitats” in the image space from the superpixels associated with the “signatures;” and (6) pooling and clustering a matrix of correlation coefficients of each pair of image parameters from all patients to discover subgroup patterns of the tumors. The pipeline was first applied to a multimodality image dataset in glioblastoma (GBM) comprising 10 image parameters. Three major image “signatures” were identified, and the three corresponding “habitats,” plus their overlaps, were created. To test the generalizability of the pipeline, a second GBM image dataset, acquired on scanners different from those of the first, was processed. To demonstrate the clinical association of the image-defined “signatures” and “habitats,” the patients' patterns of recurrence were analyzed together with image parameters acquired before chemoradiation therapy, revealing an association between the recurrence patterns and the image-defined “signatures” and “habitats.” These image-defined “signatures” and “habitats” can be used to guide stereotactic tissue biopsy for genetic and mutation-status analysis and to predict treatment outcomes, e.g., patterns of failure.
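A minimal NumPy/scikit-image sketch of steps 1-5 follows, assuming the co-registered parameter maps are stacked into an array `imgs` of shape (n_params, D, H, W) with a binary `tumor_mask`; 3D supervoxels stand in for the superpixels, the supervoxel geometry is driven by one parameter map only, the habitat clustering is a fresh clustering on {D} rather than a reordering by the step-3 result, and all counts are illustrative.

```python
import numpy as np
from skimage.segmentation import slic
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
imgs = rng.random((10, 40, 64, 64))        # 10 co-registered parameter maps
tumor_mask = np.zeros((40, 64, 64), bool)
tumor_mask[10:30, 16:48, 16:48] = True     # toy tumor volume

# (1) Supervoxels as the spatial unit inside the tumor volume.
labels = slic(imgs[0], n_segments=300, mask=tumor_mask, channel_axis=None)

# (2) Data matrix D: mean of each image parameter per supervoxel.
ids = np.unique(labels[labels > 0])
D = np.array([[imgs[p][labels == i].mean() for p in range(len(imgs))]
              for i in ids])               # (n_supervoxels, n_params)

# (3) Correlation matrix C of the parameters; cluster it into "signatures".
C = np.corrcoef(D.T)
sig = fcluster(linkage(squareform(1.0 - C, checks=False), method="average"),
               t=3, criterion="maxclust")  # parameter -> signature label

# (4)-(5) Cluster the supervoxels on D and map the cluster labels back
# into the image space to form the spatial "habitats".
hab = fcluster(linkage(D, method="ward"), t=3, criterion="maxclust")
habitats = np.zeros_like(labels)
for i, h in zip(ids, hab):
    habitats[labels == i] = h
```

Step 6 would repeat the step-3 correlation computation per patient and pool the resulting coefficient vectors into one matrix before clustering across patients; it is omitted here because it only iterates the same operations over a cohort.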