Breast density is a critical risk factor for breast cancer, yet it has traditionally been assessed through subjective radiological evaluation within the BI-RADS framework; this research seeks to mitigate the resulting inter-observer variability through automated, quantitative analysis. The transition to digital breast tomosynthesis (DBT) offers a quasi-3D perspective that could improve the accuracy of breast density assessment, but current FDA-cleared methods for volumetric breast density (VBD) estimation have notable limitations. To address these challenges, we introduce a fully automated computational tool that uses deep learning to estimate VBD directly from 3D DBT images, without relying on raw 2D data. Using retrospective data collected in compliance with privacy regulations, this study drew on DBT screening examinations from the Hospital of the University of Pennsylvania. We developed a three-class segmentation model, based on the U-Net architecture, to differentiate non-breast/background, fatty breast tissue, and dense breast tissue in DBT images. A two-stage training method was devised to improve performance and, in particular, to avoid the mis-segmentation common in high-resolution mediolateral oblique images: the first stage used resized images to capture global shape information, and the second refined the segmentation with a 3D U-Net applied to filtered input, emphasizing accurate identification of dense tissue. The model performed well, with Dice scores, the standard metric for segmentation accuracy, showing substantial agreement between the model's predictions and the reference segmentations. We validated the model's utility for breast cancer risk estimation through a case-control study, which demonstrated a statistically significant association between deep-learning-estimated VBD and cancer diagnosis. BMI and age at screening were also significantly associated with cancer status, underscoring the multifactorial nature of breast cancer risk. The model achieved an AUC of 0.63, indicating modest but meaningful discriminative ability. These findings offer a clinically significant tool for personalized breast cancer risk prediction that could help refine screening strategies across diverse populations.
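For concreteness, the sketch below illustrates the two quantities this evaluation rests on: a per-class Dice score and the VBD derived from a three-class label volume. The label convention (0 = non-breast/background, 1 = fatty, 2 = dense) and the NumPy implementation are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

# Assumed label convention (not specified in the abstract):
# 0 = non-breast/background, 1 = fatty tissue, 2 = dense tissue.
def dice_score(pred: np.ndarray, truth: np.ndarray, label: int) -> float:
    """Dice coefficient for a single class in a 3D label volume."""
    p = pred == label
    t = truth == label
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom > 0 else 1.0

def volumetric_breast_density(seg: np.ndarray) -> float:
    """VBD = dense-tissue volume / total breast volume (fatty + dense)."""
    dense = (seg == 2).sum()
    breast = (seg >= 1).sum()
    return dense / breast if breast > 0 else 0.0
```

Under this convention, VBD is simply the dense-voxel fraction of the segmented breast, which is the quantity the case-control analysis treats as the risk covariate.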
The coronavirus disease 2019 (COVID-19) pandemic had a major impact on global health and was associated with millions of deaths worldwide. During the pandemic, imaging characteristics on chest X-ray (CXR) and chest computed tomography (CT) played an important role in screening, diagnosis, and monitoring of disease progression. Various studies have suggested that quantitative image analysis methods, including artificial intelligence and radiomics, can greatly increase the value of imaging in the management of COVID-19. However, few studies have explored the use of longitudinal multimodal medical images with varying visit intervals for outcome prediction in COVID-19 patients. This study aims to explore the potential of longitudinal multimodal radiomics for predicting the outcome of COVID-19 patients by integrating CXR and CT images acquired at variable visit intervals through deep learning. A total of 2,274 patients who underwent CXR and/or CT scans during disease progression were selected for this study. Of these, 946 were treated at the University of Pennsylvania Health System (UPHS); the remaining 1,328 were imaged at Stony Brook University (SBU), with data curated by the Medical Imaging and Data Resource Center (MIDRC). In total, 532 radiomic features were extracted with the Cancer Imaging Phenomics Toolkit (CaPTk) from the lung regions of the CXR and CT images at all visits. We employed two commonly used deep learning algorithms to analyze the longitudinal multimodal features and evaluated the predictions by the area under the receiver operating characteristic curve (AUC). Our two models achieved testing AUCs of 0.816 and 0.836, respectively, for the prediction of mortality.
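The abstract does not name the two sequence models used. As a purely illustrative sketch, the following shows one common way (in PyTorch) to feed per-visit radiomic vectors with irregular visit intervals into an LSTM, appending the inter-visit gap as an extra input feature; the class name and dimensions (LongitudinalOutcomeModel, hidden=128) are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence

class LongitudinalOutcomeModel(nn.Module):
    """Illustrative LSTM over per-visit radiomic feature vectors.

    Each visit contributes a 532-dim radiomic vector plus the time (in days)
    since the previous visit, so irregular visit intervals enter the model
    as an explicit input feature.
    """
    def __init__(self, n_features: int = 532, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(n_features + 1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # logit for mortality

    def forward(self, feats, intervals, lengths):
        # feats: (B, T, 532); intervals: (B, T, 1); lengths: true visit counts
        x = torch.cat([feats, intervals], dim=-1)
        packed = pack_padded_sequence(x, lengths.cpu(), batch_first=True,
                                      enforce_sorted=False)
        _, (h, _) = self.lstm(packed)  # h[-1] is the last valid hidden state
        return self.head(h[-1]).squeeze(-1)
```

Packing the padded sequences lets patients with different numbers of visits share one batch while the recurrence only runs over each patient's real visits.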
The aim of this retrospective case-cohort study was to further validate an artificial intelligence (AI)-driven breast cancer risk model in a racially diverse cohort of women undergoing screening. We included 176 breast cancer cases with non-actionable mammographic screening exams 3 months to 2 years prior to cancer diagnosis and a random sample of 4,963 controls drawn from women with non-actionable mammographic screening exams and at least one year of negative follow-up (Hospital of the University of Pennsylvania, Philadelphia, PA, USA; 9/1/2010-1/6/2015). A risk score for each woman was extracted from full-field digital mammography (FFDM) images via an AI risk prediction model previously developed and validated in a Swedish screening cohort. The performance of the AI risk model was assessed via the age-adjusted area under the ROC curve (AUC) for the entire cohort, as well as for the two largest racial subgroups (White and Black). The Gail 5-year risk model was also evaluated for comparison. The AI risk model demonstrated an AUC of 0.68 (95% CI [0.64, 0.72]) for all women, 0.67 [0.61, 0.72] for White women, and 0.70 [0.65, 0.76] for Black women. It significantly outperformed the Gail risk model for all women (AUC = 0.68 vs. 0.55, p < 0.01) and for Black women (AUC = 0.71 vs. 0.48, p < 0.01), but not for White women (AUC = 0.66 vs. 0.61, p = 0.38). These preliminary findings in an independent dataset suggest promising performance of the AI risk prediction model in a racially diverse breast cancer screening cohort.
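The abstract does not specify the statistical procedure behind the reported p-values. A paired bootstrap over the same cases and controls is one standard way to compare two correlated AUCs, sketched below with scikit-learn; the function name and resampling count are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def bootstrap_auc_diff(y, s_ai, s_gail, n_boot=2000):
    """Paired bootstrap of AUC(AI) - AUC(Gail) on the same subjects.

    y is the binary case/control label; s_ai and s_gail are the two
    risk scores for the same women, so resampling subjects preserves
    the correlation between the two AUC estimates.
    """
    y, s_ai, s_gail = map(np.asarray, (y, s_ai, s_gail))
    idx = np.arange(len(y))
    diffs = []
    while len(diffs) < n_boot:
        b = rng.choice(idx, size=len(idx), replace=True)
        if y[b].min() == y[b].max():  # need both classes in the resample
            continue
        diffs.append(roc_auc_score(y[b], s_ai[b]) -
                     roc_auc_score(y[b], s_gail[b]))
    diffs = np.asarray(diffs)
    ci = np.percentile(diffs, [2.5, 97.5])
    p = 2 * min((diffs <= 0).mean(), (diffs >= 0).mean())  # two-sided
    return diffs.mean(), ci, p
```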