Medical images of a patient may have a significantly different appearance depending on the imaging modality (e.g., MRI vs. CT), the sequence type (e.g., T1-weighted vs. T2-weighted MRI), and even the manufacturer and model of the equipment used for the same modality and sequence type (e.g., Siemens vs. GE). Since, in the context of deep learning, training and test data often come from different institutions, it is important to determine how well neural networks generalize when image appearance varies. There is currently no systematic answer to this question. In this study, we investigate how deep neural networks trained for segmentation generalize. Our analysis is based on synthesizing a series of datasets of images containing a target object of the same shape but with varying pixel intensities of the foreground object and the background. This simulates basic effects of changing equipment models and sequence types. We also consider scenarios in which datasets with different image properties are combined, to determine whether the generalizability of the network to other scenarios is improved. We found that the generalizability of segmentation networks to changing intensities is poor. We also found that generalizability is somewhat improved when different datasets are combined, but that it is typically limited to data similar to the two types of datasets included in training and does not extend to datasets with different image intensities.
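As a rough illustration of the kind of synthetic data described above, the sketch below generates images whose foreground object keeps a fixed shape while its intensity and the background intensity vary between datasets. The disk-shaped object, the specific intensity values, and the noise level are illustrative assumptions, not the exact protocol used in the study.

```python
import numpy as np

def synthesize_dataset(n_images, fg_intensity, bg_intensity, size=128,
                       radius=24, noise_std=0.05, rng=None):
    """Create images with a fixed disk-shaped foreground object whose
    intensity (and the background intensity) can be varied per dataset."""
    rng = rng or np.random.default_rng(0)
    yy, xx = np.mgrid[:size, :size]
    images, masks = [], []
    for _ in range(n_images):
        cy, cx = rng.integers(radius, size - radius, 2)
        mask = ((yy - cy) ** 2 + (xx - cx) ** 2) <= radius ** 2
        img = np.full((size, size), bg_intensity, dtype=np.float32)
        img[mask] = fg_intensity
        img += rng.normal(0.0, noise_std, img.shape).astype(np.float32)
        images.append(img)
        masks.append(mask.astype(np.uint8))
    return np.stack(images), np.stack(masks)

# Example: two "scanners" with different foreground/background contrast.
train_x, train_y = synthesize_dataset(200, fg_intensity=0.8, bg_intensity=0.2)
test_x, test_y = synthesize_dataset(50, fg_intensity=0.3, bg_intensity=0.7)
```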
Radiogenomics is a field of investigation that attempts to examine the relationship between imaging characteristics of cancerous lesions and their genomic composition. This could offer a noninvasive alternative to establishing the genomic characteristics of tumors and aid cancer treatment planning. While deep learning has shown its superiority in many detection and classification tasks, breast cancer radiogenomic data suffer from a very limited number of training examples, which renders training a neural network for this problem directly and without pretraining a very difficult task. In this study, we investigated an alternative deep learning approach, referred to as the deep features or off-the-shelf network approach, to classify breast cancer molecular subtypes using breast dynamic contrast-enhanced MRIs. We used the feature maps of different convolutional layers and fully connected layers as features and trained support vector machines on these features for prediction. For feature maps with multiple channels, max-pooling was performed along each channel. We focused on distinguishing the Luminal A subtype from other subtypes. To evaluate the models, 10-fold cross-validation was performed, and the final AUC was obtained by averaging the performance across all folds. The highest average AUC obtained was 0.64 (95% CI: 0.57-0.71), using the feature maps of the last fully connected layer. This indicates the promise of this approach for predicting breast cancer molecular subtypes. Since the best performance was obtained with the last fully connected layer, it also suggests that breast cancer molecular subtypes may relate to high-level image features.
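For concreteness, the following sketch shows one way such a deep-features pipeline could be wired together: feature maps from a pre-trained network are reduced by channel-wise max-pooling and fed to an SVM evaluated with 10-fold cross-validated AUC. The choice of VGG16, the linear kernel, and the 224x224 input assumption are for illustration only and are not necessarily the network or settings used in the study.

```python
import torch
from torchvision import models
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Pre-trained network used purely as a fixed feature extractor (assumption: VGG16).
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

def conv_features(batch):
    """Channel-wise global max-pooling over a convolutional feature map,
    yielding one scalar per channel as the feature vector."""
    with torch.no_grad():
        fmap = vgg.features(batch)                    # shape (N, C, H, W)
        return torch.amax(fmap, dim=(2, 3)).numpy()   # shape (N, C)

# images: tensor of shape (N, 3, 224, 224); labels: Luminal A vs. other.
# X = conv_features(images)
# svm = SVC(kernel="linear", probability=True)
# aucs = cross_val_score(svm, X, labels, cv=10, scoring="roc_auc")
# print(aucs.mean())
```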
KEYWORDS: Mammography, Breast, Data modeling, Digital breast tomosynthesis, Performance modeling, Breast cancer, Feature extraction, Tumor growth modeling, Tumors, Network architectures
Breast mass detection in mammography and digital breast tomosynthesis (DBT) is an essential step in computerized breast cancer analysis. Deep learning-based methods incorporate feature extraction and model learning into a unified framework and have achieved impressive performance in various medical applications (e.g., disease diagnosis, tumor detection, and landmark detection). However, these methods require large-scale, accurately annotated data. Unfortunately, it is challenging to obtain precise annotations of breast masses. To address this issue, we propose a fully convolutional network (FCN)-based heatmap regression method for breast mass detection, using only weakly annotated mass regions in mammography images. Specifically, we first generate heatmaps of masses based on human-annotated rough regions for breast masses. We then develop an FCN model for end-to-end heatmap regression with an F-score loss function, where the mammography images are regarded as the input and the heatmaps of breast masses are used as the output. Finally, the probability map of mass locations can be estimated with the trained model. Experimental results on a mammography dataset with 439 subjects demonstrate the effectiveness of our method. Furthermore, we evaluate whether mammography data can be used to improve detection models for DBT, since mammography shares similar structures with tomosynthesis. We propose a transfer learning strategy based on fine-tuning the FCN model learned from mammography images. We test this approach on a small tomosynthesis dataset with only 40 subjects and show an improvement in detection performance compared to training the model from scratch.
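A minimal sketch of the two ingredients named above, under assumed parameterizations: a Gaussian heatmap generated from a rough bounding-box annotation of a mass, and a soft F-score loss comparing predicted and target heatmaps. The Gaussian spread, the box format, and the beta setting are illustrative assumptions rather than the paper's exact choices.

```python
import numpy as np

def region_to_heatmap(image_shape, box, sigma_scale=0.5):
    """Turn a rough box (y0, x0, y1, x1) around a mass into a soft Gaussian
    heatmap used as the regression target for the FCN."""
    y0, x0, y1, x1 = box
    cy, cx = (y0 + y1) / 2.0, (x0 + x1) / 2.0
    sig_y = max((y1 - y0) * sigma_scale, 1.0)
    sig_x = max((x1 - x0) * sigma_scale, 1.0)
    yy, xx = np.mgrid[:image_shape[0], :image_shape[1]]
    return np.exp(-((yy - cy) ** 2 / (2 * sig_y ** 2)
                    + (xx - cx) ** 2 / (2 * sig_x ** 2))).astype(np.float32)

def f_score_loss(pred, target, beta=1.0, eps=1e-7):
    """Soft F-beta loss between predicted and target heatmaps with values
    in [0, 1]; the same arithmetic applies to framework tensors."""
    tp = (pred * target).sum()
    fp = (pred * (1.0 - target)).sum()
    fn = ((1.0 - pred) * target).sum()
    f_beta = ((1 + beta ** 2) * tp + eps) / (
        (1 + beta ** 2) * tp + beta ** 2 * fn + fp + eps)
    return 1.0 - f_beta
```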
Digital breast tomosynthesis (DBT) is a relatively new modality for breast imaging that can provide detailed assessment of dense tissue within the breast. In the domains of cancer diagnosis, radiogenomics, and resident education, it is important to accurately segment breast masses. However, breast mass segmentation is a very challenging task, since mass regions have low contrast with their neighboring tissues. Notably, the task can become even more difficult for cases assigned the BI-RADS 0 category, since this category includes many lesions that are of low conspicuity or correspond to locations that were deemed to be overlapping normal tissue upon further imaging and were not sent to biopsy. Segmentation of such lesions is of particular importance in the domain of reader performance analysis and education. In this paper, we propose a novel deep learning-based method for segmentation of BI-RADS 0 lesions in DBT. The key components of our framework are an encoding path for local-to-global feature extraction and a decoding path that expands the feature maps back to the image resolution. To address the issue of limited training data, in the training stage we propose to sample patches not only in mass regions but also in non-mass regions. We utilize a Dice-like loss function in the proposed network to alleviate the class-imbalance problem. The preliminary results on 40 subjects show the promise of our method. In addition to quantitative evaluation of the method, we present a visualization of the results that demonstrates both the performance of the algorithm and the difficulty of the task at hand.
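The balanced patch sampling mentioned above might look something like the sketch below, which draws training patches from both mass and non-mass regions of a 2D slice. The patch size, the per-class counts, and the 2D simplification are assumptions for illustration.

```python
import numpy as np

def sample_patches(image, mask, patch_size=64, n_mass=20, n_background=20,
                   rng=None):
    """Sample training patches both inside mass regions and in normal
    tissue, so the network also sees negative (non-mass) context."""
    rng = rng or np.random.default_rng(0)
    half = patch_size // 2
    mass_pixels = np.argwhere(mask > 0)
    bg_pixels = np.argwhere(mask == 0)

    def crop(center):
        y, x = np.clip(center, half, np.array(image.shape[:2]) - half)
        return (image[y - half:y + half, x - half:x + half],
                mask[y - half:y + half, x - half:x + half])

    patches = []
    for pixels, n in ((mass_pixels, n_mass), (bg_pixels, n_background)):
        idx = rng.choice(len(pixels), size=min(n, len(pixels)), replace=False)
        patches.extend(crop(pixels[i][:2]) for i in idx)
    return patches
```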
Approximately 25% of patients with ductal carcinoma in situ (DCIS) diagnosed from core needle biopsy are subsequently upstaged to invasive cancer at surgical excision. Identifying patients with occult invasive disease is important, as it changes treatment and precludes enrollment in active surveillance for DCIS. In this study, we investigated upstaging of DCIS to invasive disease using deep features. While deep neural networks require large amounts of training data, the data available to predict DCIS upstaging are sparse, and thus directly training a neural network is unlikely to be successful. In this work, a pre-trained neural network is used as a feature extractor, and a support vector machine (SVM) is trained on the extracted features. We used the dynamic contrast-enhanced (DCE) MRIs of patients at our institution who underwent MRI following a diagnosis of DCIS from January 1, 2000, through March 23, 2014. Among the 131 DCIS patients, 35 were upstaged to invasive cancer. The area under the ROC curve within a 10-fold cross-validation scheme was used for validation of our predictive model. The use of deep features achieved an AUC of 0.68 (95% CI: 0.56-0.78) for predicting occult invasive disease. This preliminary work demonstrates the promise of deep features for predicting surgical upstaging following a diagnosis of DCIS.
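The validation scheme described here, per-fold AUC averaged over 10 folds for an SVM trained on pre-extracted deep features, could be set up roughly as follows; the linear kernel and stratified splitting are assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

def cross_validated_auc(features, labels, n_splits=10, seed=0):
    """Per-fold AUC for an SVM trained on pre-extracted deep features;
    stratification keeps the upstaged/non-upstaged ratio in each fold."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    aucs = []
    for train_idx, test_idx in skf.split(features, labels):
        clf = SVC(kernel="linear", probability=True)
        clf.fit(features[train_idx], labels[train_idx])
        scores = clf.predict_proba(features[test_idx])[:, 1]
        aucs.append(roc_auc_score(labels[test_idx], scores))
    return float(np.mean(aucs)), aucs
```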
Breast tumor segmentation based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) remains an active as well as a challenging problem. Previous studies often rely on manual annotation of tumor regions, which is not only time-consuming but also error-prone. Recent studies have shown the high promise of deep learning-based methods in various segmentation problems. However, these methods are usually faced with the challenge of a limited number (e.g., tens or hundreds) of medical images for training, leading to sub-optimal segmentation performance. Also, previous methods cannot efficiently deal with the prevalent class-imbalance problem in tumor segmentation, where the number of voxels in tumor regions is much lower than that in the background area. To address these issues, in this study we propose a mask-guided hierarchical learning (MHL) framework for breast tumor segmentation via fully convolutional networks (FCNs). Our strategy is to first decompose the original difficult problem into several sub-problems and then solve these relatively simpler sub-problems in a hierarchical manner. To precisely identify the locations of tumors that underwent a biopsy, we further propose an FCN model to detect two landmarks defined on the nipples. Finally, based on both the segmentation probability maps and the identified landmarks, we propose to select biopsied tumors from all detected tumors via a tumor selection strategy that uses the pathology-reported location. We validate our MHL method using data from 272 patients and achieve a mean Dice similarity coefficient (DSC) of 0.72 for breast tumor segmentation. Finally, in a radiogenomic analysis, we show that previously developed image features achieve comparable performance for identifying the Luminal A subtype when applied to the automatic segmentation and to a semi-manual segmentation, demonstrating high promise for fully automated radiogenomic analysis in breast cancer.
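As a hedged sketch of the final tumor selection step, assuming the pathology-reported location has already been mapped to an approximate image coordinate (for example, relative to the detected nipple landmarks, which is itself an assumption here), one could pick the detected candidate whose centroid lies closest to that location:

```python
import numpy as np

def select_biopsied_tumor(candidate_masks, approx_location):
    """Pick, from all detected tumor candidates, the one whose centroid is
    closest to an approximate biopsy location (given as a voxel coordinate)."""
    approx_location = np.asarray(approx_location, dtype=float)
    best_idx, best_dist = None, np.inf
    for i, mask in enumerate(candidate_masks):
        coords = np.argwhere(mask > 0)
        if coords.size == 0:
            continue
        centroid = coords.mean(axis=0)
        dist = np.linalg.norm(centroid - approx_location)
        if dist < best_dist:
            best_idx, best_dist = i, dist
    return best_idx
```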