KEYWORDS: Breast, Breast density, Mammography, Data modeling, Education and training, Performance modeling, Deep learning, Reliability, Histograms, Cancer
Purpose: Breast density is associated with the risk of developing cancer and can be automatically estimated from digital mammograms using deep learning models. Our aim is to evaluate the capacity and reliability of such models to predict density from low-dose mammograms taken to enable risk estimates for younger women.
Approach: We trained deep learning models on standard-dose and simulated low-dose mammograms. The models were then tested on a mammography dataset with paired standard- and low-dose images. The effect of different factors (including age, density, and dose ratio) on the differences between predictions on standard and low doses was analyzed. Methods to improve performance were assessed, and factors that reduce model quality were demonstrated.
Results: We showed that, although many factors have no significant effect on the quality of low-dose density prediction, both density and breast area have an impact. The correlation between density predictions on low- and standard-dose images of breasts with the largest breast area is 0.985 (0.949 to 0.995), whereas that with the smallest is 0.882 (0.697 to 0.961). We also demonstrated that averaging across craniocaudal-mediolateral oblique (CC-MLO) images and across repeatedly trained models can improve predictive performance.
Conclusions: Low-dose mammography can be used to produce density and risk estimates that are comparable to those from standard-dose images. Averaging across CC-MLO and model predictions should improve this performance. Model quality is reduced when making predictions on denser and smaller breasts.
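The view- and model-averaging the abstract describes amounts to taking a simple mean across paired CC/MLO predictions and across repeated training runs. A minimal sketch with hypothetical numbers (not the authors' implementation):

```python
import numpy as np

def ensemble_density(pred_cc, pred_mlo):
    """Average per-breast density predictions across CC and MLO views."""
    return (np.asarray(pred_cc, float) + np.asarray(pred_mlo, float)) / 2.0

def model_ensemble(predictions):
    """Average predictions from repeatedly trained models.

    `predictions` has shape (n_models, n_images); the mean is taken
    over the model axis so each image gets one ensembled estimate.
    """
    return np.mean(np.asarray(predictions, float), axis=0)

# Toy example: three independently trained models scoring four images
preds = [[30.0, 42.0, 55.0, 20.0],
         [28.0, 44.0, 53.0, 22.0],
         [32.0, 40.0, 57.0, 18.0]]
avg = model_ensemble(preds)
```

Averaging reduces the variance of individual model runs, which is consistent with the reported improvement in predictive performance.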
The prevention and early detection of breast cancer hinges on precise prediction of individual breast cancer risk. Whilst well-established clinical risk factors can be used to stratify the population into risk groups, the addition of genetic information and breast density has been shown to improve prediction. Deep learning based approaches have been shown to automatically extract complex information from images. However, this is a challenging area of research, partly due to the lack of data within the field; therefore, there is scope for novel approaches. Our method uses Multiple Instance Learning in tandem with attention to make accurate, short-term risk predictions from full-sized mammograms taken prior to the detection of cancer. This approach ensures that small features like calcifications are not lost in a downsizing process and that the whole mammogram is analysed effectively. An attention pooling mechanism is designed to highlight patches of increased importance and improve performance. We also use transfer learning to utilise a rich source of screen-detected cancers and evaluate whether a model trained to detect cancers in mammograms also allows us to predict risk in priors. Our model achieves an AUC of 0.620 (0.585, 0.657) in cancer-free screening mammograms of women who went on to develop a screen-detected or interval cancer between 5 and 55 months later, including when adjusting for common breast cancer risk factors. Additionally, our model is able to discriminate interval cancers at an AUC of 0.638 (0.572, 0.703), which highlights the potential for such a model to be used alongside national screening programmes.
Accurate prediction of individual breast cancer risk paves the way for personalised prevention and early detection. Whilst well-established clinical risk factors can be used to stratify the population into risk groups, the addition of genetic information and breast density has been shown to improve prediction. Machine learning enabled automatic risk prediction provides key advantages over existing methods, such as the ability to extract more complex information from mammograms. However, this is a challenging area of research, partly due to the lack of data within the field; therefore, there is scope for novel approaches. Our method uses Multiple Instance Learning in tandem with attention to make accurate, short-term risk predictions from full-sized mammograms taken prior to the detection of cancer. This approach ensures that small features like calcifications are not lost in a downsizing process and that the whole mammogram is analysed effectively. An attention pooling mechanism is designed to highlight patches of increased importance and improve performance. Additionally, this increases the interpretability of our model, as important patches can be shown in a saliency map. We also use transfer learning to utilise a rich source of screen-detected cancers and evaluate whether a model trained to detect cancers in mammograms also allows us to predict risk in priors. Our model achieves an AUC of 0.635 (0.600, 0.669) in cancer-free screening mammograms of women who went on to develop a screen-detected or interval cancer between 5 and 55 months later and an AUC of 0.804 (0.777, 0.830) in screen-detected cancers.
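The attention pooling described in the two abstracts above weights each mammogram patch by a learned importance score before aggregating into a single bag-level representation. A minimal NumPy sketch of attention-based Multiple Instance Learning pooling; the shapes and randomly drawn weights are hypothetical and do not reflect the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_pool(patch_feats, V, w):
    """Attention-weighted MIL pooling.

    Scores s_i = w . tanh(V h_i), weights a = softmax(s),
    bag representation = sum_i a_i h_i. The weights `a` double
    as a per-patch saliency map.
    """
    scores = np.tanh(patch_feats @ V.T) @ w        # one score per patch
    a = np.exp(scores - scores.max())              # stable softmax
    a = a / a.sum()
    return a @ patch_feats, a

# Hypothetical sizes: 6 patches, 16-d features, 8 attention units
h = rng.normal(size=(6, 16))
V = rng.normal(size=(8, 16))
w = rng.normal(size=8)
bag, attn = attention_pool(h, V, w)
```

Because the softmax weights sum to one, the most informative patches dominate the bag vector, which is what lets the full-sized mammogram be analysed without downsizing away small features such as calcifications.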
KEYWORDS: Education and training, Breast density, Deep learning, Data modeling, Mammography, Feature extraction, Linear regression, Performance modeling, Cancer, Image processing
Purpose: Mammographic breast density is one of the strongest risk factors for cancer. Density assessed by radiologists using visual analogue scales has been shown to provide better risk predictions than other methods. Our purpose is to build automated models using deep learning, trained on radiologist scores, to make accurate and consistent predictions.
Approach: We used a dataset of almost 160,000 mammograms, each with two independent density scores made by expert medical practitioners. We used two pretrained deep networks and adapted them to produce feature vectors, which were then used for both linear and nonlinear regression to make density predictions. We also simulated an “optimal method,” which allowed us to compare the quality of our results with a simulated upper bound on performance.
Results: Our deep learning method produced estimates with a root mean squared error (RMSE) of 8.79 ± 0.21. The model estimates of cancer risk perform at a similar level to human experts, within uncertainty bounds. We made comparisons between different model variants and demonstrated the high level of consistency of the model predictions. Our modeled “optimal method” produced image predictions with an RMSE of between 7.98 and 8.90 for craniocaudal images.
Conclusion: We demonstrated a deep learning framework based upon a transfer learning approach to make density estimates based on radiologists’ visual scores. Our approach requires modest computational resources and has the potential to be trained with limited quantities of data.
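The transfer learning pipeline above extracts fixed feature vectors from pretrained networks and fits a regression on top of them. A sketch of the linear-regression stage with entirely simulated data standing in for the network features and radiologist VAS scores (the feature dimension, regularisation, and score distribution are all assumptions, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated stand-ins: 200 images with 64-d "pretrained network" features
# and VAS-style density scores in [0, 100].
X = rng.normal(size=(200, 64))
true_w = rng.normal(size=64)
y = np.clip(50 + 3 * (X @ true_w) + rng.normal(scale=5, size=200), 0, 100)

# Ridge regression on the frozen features (closed form), with a bias column.
lam = 1.0
Xb = np.hstack([X, np.ones((200, 1))])
A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])
w = np.linalg.solve(A, Xb.T @ y)

pred = Xb @ w
rmse = np.sqrt(np.mean((pred - y) ** 2))   # RMSE, the paper's reported metric
```

Fitting only the small regression head is what keeps the computational cost modest and makes training feasible with limited data.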
Breast density is an important breast cancer risk factor, both because it decreases mammography sensitivity and as an independent risk factor. This research aims to establish the distribution of breast density in the Saudi screening population and to identify the relationship between visual and automated breast density methods. Screening mammograms from 2905 cancer-free women were retrospectively collected from the Saudi National Breast Cancer Screening Programme. Breast density of the screening mammograms was assessed visually by 11 radiologists using the Breast Imaging Reporting and Data System (BI-RADS) 5th edition and a Visual Analogue Scale (VAS), and by automated methods: predicted VAS processed (pVASprocessed), predicted VAS raw (pVASraw), and VolparaTM. The relationship between breast density methods was assessed using the intra-class correlation coefficient (ICC) and weighted kappa (κ). Results indicated that around one-third of Saudi women of screening age had high breast density (BI-RADS C/D: 31.5%; Volpara Density Grade (VDG) C/D: 29.0%). Full screening mammograms from 1022 women were used to assess the relationship between all methods. Predicted VAS estimates of percent density were generally lower than VAS. The highest ICC was between VAS and pVASraw (ICC = 0.86, 95% CI 0.84-0.88). For categorical breast density methods, VDG 5th edition showed fair agreement with BI-RADS 5th edition (κ = 0.35, 95% CI 0.29-0.39). In conclusion, this study shows that the majority of Saudi women of screening age have low breast density as assessed by both visual and automated methods, and that there is a positive relationship between visual and automated methods, which is strongest for VAS and pVASraw.
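The weighted kappa used above to compare categorical density grades penalises disagreements by how far apart the categories are. A minimal sketch with hypothetical rater codes (BI-RADS a–d mapped to 0–3); this is a generic implementation, not the study's statistical software:

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat, weights="linear"):
    """Weighted kappa between two categorical raters (codes 0..n_cat-1)."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    O = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):                 # observed agreement matrix
        O[a, b] += 1
    O /= O.sum()
    E = np.outer(O.sum(axis=1), O.sum(axis=0))   # chance-expected matrix
    i, j = np.indices((n_cat, n_cat))
    W = np.abs(i - j) if weights == "linear" else (i - j) ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()

# Hypothetical example: two raters grading six mammograms a-d (0-3)
k = weighted_kappa([0, 1, 2, 3, 1, 2], [0, 1, 1, 3, 2, 2], 4)
```

A value near 0.35, as reported for VDG versus BI-RADS, falls in the conventional "fair agreement" band.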
Estimation of breast density for cancer risk prediction is generally achieved by analysis of full-field digital mammograms. Conventional digital mammography should be avoided where possible in young women because of concerns about potential cancer induction, particularly in those with dense breasts, who receive higher doses. This precludes repeated examinations over a short timescale to assess density change. We assess whether density can be accurately estimated from low-dose mammograms acquired at one-tenth of the standard dose, with the aim of providing a safe and effective method for use in younger women that is suitable for serial density measurement. We present analysis of data from an ongoing clinical trial in which both standard- and low-dose mammograms are acquired under the same compression. We used both an existing convolutional neural network model designed to estimate breast density and a new model developed using a transfer learning approach. We then applied three methods to estimate density on the low-dose mammograms: training on a different mammogram dataset; using simulated low-dose data; and training directly on low-dose mammograms using cross-validation. Pearson correlation coefficients between measurements on full-dose and low-dose mammograms ranged from 0.92 to 0.98, with the root mean squared error ranging between 3.37 and 7.27. Our results indicate that accurate density measurements can be made using low-dose mammograms.
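The agreement statistics reported above, Pearson correlation and RMSE between paired full-dose and low-dose estimates, can be computed directly from the paired measurements. A small sketch with made-up paired scores, not the trial data:

```python
import numpy as np

def agreement(full_dose, low_dose):
    """Pearson r and RMSE between paired density estimates."""
    fd = np.asarray(full_dose, float)
    ld = np.asarray(low_dose, float)
    r = np.corrcoef(fd, ld)[0, 1]                  # Pearson correlation
    rmse = np.sqrt(np.mean((fd - ld) ** 2))        # paired RMSE
    return r, rmse

# Hypothetical paired VAS-style density scores for five breasts
fd = [12.0, 30.0, 45.0, 60.0, 75.0]
ld = [14.0, 28.0, 47.0, 58.0, 78.0]
r, rmse = agreement(fd, ld)
```

Using both statistics matters: a high correlation shows the low-dose estimates track the full-dose ones, while the RMSE bounds the absolute disagreement between the paired measurements.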