KEYWORDS: Breast, Breast density, Mammography, Data modeling, Education and training, Performance modeling, Deep learning, Reliability, Histograms, Cancer
Purpose: Breast density is associated with the risk of developing cancer and can be automatically estimated using deep learning models from digital mammograms. Our aim is to evaluate the capacity and reliability of such models to predict density from low-dose mammograms taken to enable risk estimates for younger women.

Approach: We trained deep learning models on standard-dose and simulated low-dose mammograms. The models were then tested on a mammography dataset with paired standard- and low-dose images. The effect of different factors (including age, density, and dose ratio) on the differences between predictions on standard and low doses is analyzed. Methods to improve performance are assessed, and factors that reduce the model quality are demonstrated.

Results: We showed that, although many factors have no significant effect on the quality of low-dose density prediction, both density and breast area have an impact. The correlation between density predictions on low- and standard-dose images of breasts with the largest breast area is 0.985 (0.949 to 0.995), whereas that with the smallest is 0.882 (0.697 to 0.961). We also demonstrated that averaging across craniocaudal-mediolateral oblique (CC-MLO) images and across repeatedly trained models can improve predictive performance.

Conclusions: Low-dose mammography can be used to produce density and risk estimates that are comparable to standard-dose images. Averaging across CC-MLO and model predictions should improve this performance. The model quality is reduced when making predictions on denser and smaller breasts.
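The averaging strategy described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-image density predictions below are hypothetical numbers, standing in for the outputs of repeatedly trained models on a woman's CC and MLO views.

```python
import numpy as np

# Hypothetical per-image density predictions (percent) for one woman:
# rows = repeated model trainings, columns = views (CC, MLO).
preds = np.array([
    [31.2, 29.8],   # model 1: CC, MLO
    [30.5, 30.9],   # model 2
    [32.0, 29.1],   # model 3
])

# Average over views first (per-model CC-MLO mean), then over models.
per_model = preds.mean(axis=1)        # CC-MLO average for each model
final_estimate = per_model.mean()     # ensemble average across models

print(round(final_estimate, 2))       # prints 30.58
```

With equal weights the two-stage average equals the overall mean of all images, but keeping the per-model step makes it easy to weight or drop individual trainings.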
Breast density is an important factor in assessing individual breast cancer risk. We aim to identify women at increased risk of developing breast cancer before they enter routine screening, using mammography in combination with known risk factors. This will enable targeting of preventive therapies and personalised screening. To reduce radiation risk, this paper examines whether density measurements from one breast or a single mammographic view can accurately reflect individual risk. We analysed breast cancer risk using breast density in a 1:3 case-control dataset of mammograms from the Predicting Risk of Cancer at Screening Study (PROCAS). Breast density was measured using pVAS, an AI-based approach. Cancer risk in low and high breast density groups was compared using conditional logistic regression. High breast density was independently associated with increased breast cancer risk. Women in the highest breast density quintile, with density averaged across all views, had an odds ratio (OR) of 4.16 (95% CI 2.90-5.97) compared with those in the lowest. Similar ORs were found for the left (3.77, 95% CI 2.68-5.31) and right (4.52, 95% CI 3.12-6.55) breasts individually. ORs were also significant for each individual view: right mediolateral oblique (MLO) 4.19 (2.92–6.00), right craniocaudal (CC) 4.40 (3.09–6.27), left MLO 3.27 (2.34–4.56), and left CC 3.65 (2.60–5.11). Breast cancer risk due to increased breast density could be predicted using one breast, and even one mammographic view. This raises the possibility of a pre-screening risk assessment using fewer images and therefore less radiation.
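The quantity being estimated above can be illustrated with a simple unadjusted calculation. The paper uses conditional logistic regression on matched case-control sets; as a hedged sketch of what an odds ratio with a Wald confidence interval looks like, the 2x2 counts below are invented for illustration (they are not the study's data, though they are chosen to yield an OR of 4.16).

```python
import math

# Hypothetical 2x2 counts: highest vs lowest density quintile.
cases_high, cases_low = 120, 40
controls_high, controls_low = 150, 208

# Unadjusted odds ratio from the cross-product of the table.
odds_ratio = (cases_high * controls_low) / (cases_low * controls_high)

# Wald 95% CI on the log-odds scale.
se = math.sqrt(1/cases_high + 1/cases_low + 1/controls_high + 1/controls_low)
ci_lo = math.exp(math.log(odds_ratio) - 1.96 * se)
ci_hi = math.exp(math.log(odds_ratio) + 1.96 * se)
```

A conditional logistic model additionally respects the 1:3 matching and adjusts for covariates, so its CI differs from this unadjusted one.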
KEYWORDS: Breast density, Breast cancer, Mammography, Breast, Cancer, Brain-machine interfaces, Statistical analysis, Education and training, Visualization, Ovarian cancer
Introduction: Breast cancer is the most common female cancer worldwide; however, ethnic differences have been observed in both prevalence and prognosis, with Black women often having less favorable outcomes. Increased breast density is an independent risk factor for breast cancer and reduces the efficacy of mammographic screening. We investigate how it relates to ethnicity, to facilitate the provision of appropriate screening and advice to all women. Method: We use data from the UK Predicting Risk of Cancer at Screening (PROCAS) study. This involved completion of a questionnaire to obtain personal risk factor information during routine breast screening. Mammographic density was assessed using Visual Analogue Scales (VAS), and these scores were used to train an AI-based density measure, pVAS, which we applied to raw mammographic data from 41,241 women in PROCAS. Analysis of covariance was used to assess the relationship between ethnicity and breast density after adjusting for age, body mass index (BMI), menopausal status, hormone replacement therapy (HRT) use, parity, alcohol consumption, and family history of breast cancer. Pairwise comparisons for each ethnic group were performed using a Bonferroni correction. Results: 91.0% of the study population were White, 1.6% Asian, 1.1% Black and 1.0% Jewish. Jewish women had higher breast density than all other ethnic groups studied (p<0.001), with a mean pVAS of 34.8% (95% CI 33.6-36.1). Asian women had a mean density of 31.4% (95% CI 30.4-32.4) and significantly denser breasts than White women, who had a mean pVAS density of 28.6% (95% CI 28.4-28.7). Conclusion: Previous research has reported mixed results. The relationships between risk factors for breast cancer are complex, and the data are not always complete, making this a challenging area of research. Our results support published evidence that some groups have increased density, and this relationship should be considered to ensure equity in screening and diagnosis.
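The Bonferroni correction used for the pairwise comparisons above is a simple multiplicity adjustment; a minimal sketch, with a hypothetical raw p-value, is:

```python
from itertools import combinations

# Ethnic groups named in the abstract; 4 groups give 6 pairwise tests.
groups = ["White", "Asian", "Black", "Jewish"]
pairs = list(combinations(groups, 2))
n_comparisons = len(pairs)                 # 6

alpha = 0.05
bonferroni_alpha = alpha / n_comparisons   # per-test significance threshold

# Equivalently, inflate each raw p-value (capped at 1.0).
raw_p = 0.004                              # hypothetical raw p-value
adjusted_p = min(raw_p * n_comparisons, 1.0)
```

Either form is standard: compare raw p-values against `bonferroni_alpha`, or adjusted p-values against `alpha`.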
The prevention and early detection of breast cancer hinges on precise prediction of individual breast cancer risk. Whilst well-established clinical risk factors can be used to stratify the population into risk groups, the addition of genetic information and breast density has been shown to improve prediction. Deep learning-based approaches have been shown to automatically extract complex information from images. However, this is a challenging area of research, partly due to the lack of data within the field; therefore there is scope for novel approaches. Our method uses Multiple Instance Learning in tandem with attention in order to make accurate, short-term risk predictions from full-sized mammograms taken prior to the detection of cancer. This approach ensures small features like calcifications are not lost in a downsizing process and the whole mammogram is analysed effectively. An attention pooling mechanism is designed to highlight patches of increased importance and improve performance. We also use transfer learning in order to utilise a rich source of screen-detected cancers and evaluate whether a model trained to detect cancers in mammograms also allows us to predict risk in prior mammograms. Our model achieves an AUC of 0.620 (0.585, 0.657) in cancer-free screening mammograms of women who went on to develop a screen-detected or interval cancer between 5 and 55 months later, even after accounting for common breast cancer risk factors. Additionally, our model is able to discriminate interval cancers at an AUC of 0.638 (0.572, 0.703), highlighting the potential for such a model to be used alongside national screening programmes.
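The attention pooling described above can be sketched in numpy. This is not the paper's architecture: the patch features and attention parameters below are random stand-ins, used only to show how per-patch attention scores are softmax-normalised and then used to pool a bag of patches into one mammogram-level feature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bag of patch features: 12 patches, 8-dim features each,
# standing in for CNN features extracted from a full-sized mammogram.
patches = rng.normal(size=(12, 8))

# Attention scoring (parameters random here; learned in a real model).
V = rng.normal(size=(8, 4))
w = rng.normal(size=(4,))

scores = np.tanh(patches @ V) @ w     # one relevance score per patch
attn = np.exp(scores - scores.max())
attn = attn / attn.sum()              # softmax attention weights

bag_feature = attn @ patches          # attention-weighted bag feature
```

Because the bag feature is a weighted average, the model can operate on the full-resolution image patch by patch, while the attention weights indicate which patches drove the prediction.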
KEYWORDS: Education and training, Breast density, Deep learning, Data modeling, Mammography, Feature extraction, Linear regression, Performance modeling, Cancer, Image processing
Purpose: Mammographic breast density is one of the strongest risk factors for cancer. Density assessed by radiologists using visual analogue scales has been shown to provide better risk predictions than other methods. Our purpose is to build automated models using deep learning and train on radiologist scores to make accurate and consistent predictions.

Approach: We used a dataset of almost 160,000 mammograms, each with two independent density scores made by expert medical practitioners. We used two pretrained deep networks and adapted them to produce feature vectors, which were then used for both linear and nonlinear regression to make density predictions. We also simulated an "optimal method," which allowed us to compare the quality of our results with a simulated upper bound on performance.

Results: Our deep learning method produced estimates with a root mean squared error (RMSE) of 8.79 ± 0.21. The model estimates of cancer risk perform at a similar level to human experts, within uncertainty bounds. We made comparisons between different model variants and demonstrated the high level of consistency of the model predictions. Our modeled "optimal method" produced image predictions with a RMSE of between 7.98 and 8.90 for craniocaudal images.

Conclusion: We demonstrated a deep learning framework based upon a transfer learning approach to make density estimates based on radiologists' visual scores. Our approach requires modest computational resources and has the potential to be trained with limited quantities of data.
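The transfer learning pipeline above, pretrained features fed into a regression against radiologists' scores, can be sketched with a ridge-regularised linear fit. Everything below is synthetic: the random matrix `X` stands in for features from a pretrained network, and `y` stands in for VAS density scores.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: n_images feature vectors from a pretrained network,
# regressed against radiologists' VAS density scores.
n_images, n_features = 200, 16
X = rng.normal(size=(n_images, n_features))
true_w = rng.normal(size=(n_features,))
y = X @ true_w + rng.normal(scale=2.0, size=n_images)  # VAS-like targets

# Ridge-regularised least squares: w = (X'X + lam*I)^-1 X'y.
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

rmse = float(np.sqrt(np.mean((X @ w - y) ** 2)))
```

The appeal of this design, as the conclusion notes, is that only the small regression head is fitted, so training needs modest compute and comparatively little labelled data.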