In this study we used a large, previously built database of 2,892 mammograms and 31,650 single-mammogram radiologists' assessments to simulate the impact of replacing one radiologist with an AI system in a double reading setting. The double human reading scenario and the hybrid double reading scenario (second reader replaced by an AI system) were simulated via bootstrapping, using different combinations of mammograms and radiologists from the database. The main outcomes of each scenario were sensitivity, specificity, and workload (number of necessary readings). The results showed that when using AI as a second reader, workload can be reduced by 44%, sensitivity remains similar (difference -0.1%; 95% CI: -4.1%, 3.9%), and specificity increases by 5.3% (P<0.001). Our results suggest that using AI as a second reader in a double reading setting, as in screening programs, could be a strategy to reduce workload and false positive recalls without affecting sensitivity.
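For readers who want to see the shape of such a simulation, the following is a minimal Python sketch of the bootstrap procedure, assuming per-exam boolean recall decisions for two human readers and the AI system (all array names and the data layout are illustrative assumptions, not the study's actual code):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def simulate_double_reading(recall_reader1, recall_reader2, recall_ai, cancer, n_boot=1000):
    """Boolean arrays with one entry per exam (hypothetical layout, for illustration)."""
    results = {"human": [], "hybrid": []}
    n = len(cancer)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample exams with replacement
        y = cancer[idx]
        scenarios = {
            "human": recall_reader1[idx] | recall_reader2[idx],  # recall if either human recalls
            "hybrid": recall_reader1[idx] | recall_ai[idx],      # AI replaces the second reader
        }
        for name, recalled in scenarios.items():
            sensitivity = (recalled & y).sum() / y.sum()
            specificity = (~recalled & ~y).sum() / (~y).sum()
            results[name].append((sensitivity, specificity))
    return results  # per-bootstrap values; CIs can be taken as percentiles of these
```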
Computer-aided detection aims to improve breast cancer screening programs by helping radiologists evaluate digital mammography (DM) exams. DM exams are generated by devices from different vendors, with diverse characteristics between and even within vendors. The physical properties of these devices and the postprocessing of the images can greatly influence the resulting mammogram. As a consequence, a deep learning model trained on data from one vendor cannot readily be applied to data from another vendor. This paper investigates the use of tailored transfer learning methods based on adversarial learning to tackle this problem. We consider a database of DM exams (mostly bilateral and two views) generated by Hologic and Siemens systems. We analyze two transfer learning settings: 1) unsupervised transfer, where Hologic data with pixel-level soft-tissue lesion annotations and unlabelled Siemens data are used to annotate images in the latter set; 2) weakly supervised transfer, where exam-level labels for images from the Siemens systems are also available. We propose tailored variants of recent state-of-the-art methods for transfer learning that take into account the class imbalance and incorporate the knowledge provided by the exam-level annotations. Experimental results indicate the beneficial effect of transfer learning in both settings. Notably, at 0.02 false positives per image, we achieve a sensitivity of 0.37, compared with 0.30 for a baseline without transfer. The results also indicate that using exam-level annotations yields an additional increase in sensitivity.
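The paper's tailored adversarial variants are not reproduced here, but a common core mechanism of adversarial transfer learning, a gradient reversal layer that drives the feature extractor toward vendor-invariant representations (as in DANN), can be sketched in PyTorch as follows; this is a generic illustration under that assumption, not the authors' implementation:

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None  # reversed gradient flows into the feature extractor

class DomainAdversarialHead(nn.Module):
    """Domain classifier that tries to tell Hologic from Siemens features; the reversed
    gradient pushes the shared feature extractor toward vendor-invariant features."""
    def __init__(self, n_features, lam=1.0):
        super().__init__()
        self.lam = lam
        self.classifier = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 2)
        )

    def forward(self, features):
        return self.classifier(GradReverse.apply(features, self.lam))
```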
KEYWORDS: Mammography, Breast, Image processing, Magnetic resonance imaging, Breast cancer, Convolutional neural networks, Digital imaging, Convolution, Classification systems
Breast density is an important factor in breast cancer screening. Methods exist to measure the volume of dense breast tissue from 2D mammograms. However, these methods can only be applied to raw mammograms. Breast density classification methods developed for processed mammograms are commonly based on radiologists' Breast Imaging Reporting and Data System (BI-RADS) annotations. Unfortunately, such labels are subjective and may introduce personal bias and inter-reader discrepancy. To avoid these limitations, this paper presents a method for estimating percent dense tissue volume (PDV) from processed full-field digital mammograms (FFDM) using a deep learning approach. A convolutional neural network (CNN) was implemented to carry out a regression task: estimating PDV using density measurements on raw FFDM as ground truth. The dataset used for training, validation, and testing (Set A) includes over 2000 clinical cases from 3 different vendors. Our results show a high correlation between the predicted PDV and the raw measurements, with a Spearman's correlation coefficient of r=0.925. The CNN was also tested on an independent set of 97 clinical cases (Set B) for which PDV measurements from both FFDM and MRI were available. CNN predictions on Set B showed a high correlation with both raw FFDM and MRI data (r=0.897 and r=0.903, respectively). Set B also had radiologist-annotated BI-RADS labels, which agreed with the estimated values to a high degree, showing that our CNN can distinguish between BI-RADS categories comparably to methods applied to raw mammograms.
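The reported agreement can be computed directly with SciPy; a minimal sketch of the evaluation step, with placeholder arrays standing in for the CNN predictions and the raw-FFDM reference values:

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholder values standing in for predicted and reference PDV (percent).
pdv_predicted = np.array([12.1, 8.4, 25.0, 31.7, 6.3])
pdv_reference = np.array([11.5, 9.0, 23.8, 33.1, 5.9])

rho, p = spearmanr(pdv_predicted, pdv_reference)
print(f"Spearman's r = {rho:.3f} (p = {p:.3g})")
```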
Breast density is an important risk factor for the development of breast cancer. Over a woman's lifetime, breast glandularity varies due to hormonal changes; in particular, around menopause the glandular tissue tends to decrease. The aim of this paper is to evaluate temporal breast density changes using density maps provided by the commercial software Volpara™. The dataset is composed of 563 mammograms from 55 patients (aged between 24 and 75 years). The time frame between two acquisitions varies from less than one year to 4 years. Pairs of mammograms are registered using the Morphons registration algorithm in order to evaluate the structural similarity of the parenchymal distribution between the two acquisitions. To provide a fair comparison, the results are stratified by the patient's age at the first mammographic acquisition and by the time between the two studies. To evaluate the changes in breast density, local and global measures are considered, such as the rate of change of the volumetric breast density, the histogram intersection between two density maps, and the normalized cross-correlation after registration. The results show significant differences in these statistics, mainly for patients younger than 30 years old and those aged between 56 and 65 years old, with respect to patients aged between 30 and 55 years old. Similarly, the time between the two mammographic acquisitions shows a significant difference for patients older than 56 years old when comparing one and two years between the two studies.
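As an illustration of two of the comparison measures named above, a short NumPy sketch of histogram intersection and normalized cross-correlation between a registered pair of density maps (the bin count and value range are assumptions, not taken from the paper):

```python
import numpy as np

def histogram_intersection(map_a, map_b, bins=64, value_range=(0.0, 1.0)):
    """Intersection of normalized density histograms (1.0 = identical distributions)."""
    h_a, _ = np.histogram(map_a, bins=bins, range=value_range)
    h_b, _ = np.histogram(map_b, bins=bins, range=value_range)
    h_a = h_a / h_a.sum()
    h_b = h_b / h_b.sum()
    return np.minimum(h_a, h_b).sum()

def normalized_cross_correlation(map_a, map_b):
    """NCC between a registered pair of density maps (mean-centered dot product)."""
    a = map_a - map_a.mean()
    b = map_b - map_b.mean()
    return (a * b).sum() / (np.sqrt((a ** 2).sum()) * np.sqrt((b ** 2).sum()))
```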
Computer-aided detection or decision support systems aim to improve breast cancer screening programs by helping radiologists evaluate digital mammography (DM) exams. Commonly, such methods proceed in two steps: selection of candidate regions for malignancy, followed by classification as either malignant or not. In this study, we present a candidate detection method based on deep learning that automatically detects and additionally segments soft-tissue lesions in DM. A database of DM exams (mostly bilateral and two views) was collected from our institutional archive. In total, 7196 DM exams (28294 DM images) acquired with systems from three different vendors (General Electric, Siemens, Hologic) were collected, of which 2883 contained malignant lesions verified with histopathology. Data was randomly split on an exam level into training (50%), validation (10%), and testing (40%) sets for a deep neural network with a u-net architecture. The u-net classifies the image and also provides a lesion segmentation. Free-response receiver operating characteristic (FROC) analysis was used to evaluate the model, on an image and on an exam level. On an image level, a maximum sensitivity of 0.94 at 7.93 false positives (FP) per image was achieved. Similarly, per exam, a maximum sensitivity of 0.98 at 7.81 FP per image was achieved. In conclusion, the method could be used as a candidate selection model with high accuracy and with the additional information of lesion segmentation.
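A minimal sketch of how FROC operating points can be computed from scored candidate detections, assuming at most one candidate is matched per lesion (a simplification of full FROC matching, for illustration only):

```python
import numpy as np

def froc_points(candidates, n_images, n_lesions):
    """candidates: list of (score, is_true_positive) tuples for all detections.
    Returns (false positives per image, lesion sensitivity) at every threshold."""
    candidates = sorted(candidates, key=lambda c: -c[0])  # descending score
    fp, tp = 0, 0
    fps_per_image, sensitivity = [], []
    for score, is_tp in candidates:
        if is_tp:
            tp += 1
        else:
            fp += 1
        fps_per_image.append(fp / n_images)
        sensitivity.append(tp / n_lesions)
    return np.array(fps_per_image), np.array(sensitivity)
```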
KEYWORDS: Image segmentation, Digital breast tomosynthesis, Breast, Mammography, Systems modeling, Detection and tracking algorithms, Tissues, Convolutional neural networks, 3D modeling, Image processing algorithms and systems
Digital breast tomosynthesis (DBT) has superior detection performance compared with digital mammography (DM) for population-based breast cancer screening, but the higher number of images that must be reviewed poses a challenge to its implementation. This may be ameliorated by creating a two-dimensional synthetic mammographic image (SM) from the DBT volume, containing the most relevant information. When creating an SM, an accurate lesion localization algorithm is of utmost importance, while segmenting fibroglandular tissue could also be beneficial. These tasks encounter an extra challenge when working with images in the medio-lateral oblique view, due to the presence of the pectoral muscle, which has a similar radiographic density. In this work, we present an automatic pectoral muscle segmentation model based on a u-net deep learning architecture, trained with 136 DBT images acquired with a single system (different BI-RADS® densities and pathological findings). The model was tested on 36 DBT images from the same system, resulting in a Dice similarity coefficient (DSC) of 0.977 (0.967-0.984). In addition, the model was tested on 125 images from two different systems and three different modalities (DBT, SM, DM), obtaining DSCs between 0.947 and 0.970, a range determined visually to provide adequate segmentations. For reference, a resident radiologist independently annotated a mix of 25 cases, obtaining a DSC of 0.971. The results suggest the possibility of using this model for inter-manufacturer DBT, DM, and SM tasks that benefit from segmentation of the pectoral muscle, such as SM generation, computer-aided detection systems, or patient dosimetry algorithms.
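The DSC used above is a standard overlap measure; for reference, a minimal NumPy implementation for boolean segmentation masks:

```python
import numpy as np

def dice_similarity(mask_pred, mask_ref):
    """DSC = 2 * |A intersect B| / (|A| + |B|) for boolean masks."""
    intersection = np.logical_and(mask_pred, mask_ref).sum()
    return 2.0 * intersection / (mask_pred.sum() + mask_ref.sum())
```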
KEYWORDS: Magnetic resonance imaging, Breast, Computer aided diagnosis and therapy, Breast cancer, Image segmentation, Convolutional neural networks, Computing systems, 3D acquisition, Cancer
Current computer-aided detection (CADe) systems for contrast-enhanced breast MRI rely on both spatial information obtained from the early phase and temporal information obtained from the late phase of the contrast enhancement. However, late-phase information might not be available in a screening setting, such as in abbreviated MRI protocols, where acquisition is limited to early-phase scans. We used deep learning to develop a CADe system that exploits the spatial information obtained from the early-phase scans. This system uses three-dimensional (3-D) morphological information at the candidate locations and the symmetry information arising from the enhancement differences between the two breasts. We compared the proposed system with a previously developed system that uses the full dynamic breast MRI protocol. For training and testing, we used 385 MRI scans containing 161 malignant lesions. Performance was measured by averaging the sensitivity values between 1/8 and 8 false positives per scan. In our experiments, the proposed system obtained a significantly (p=0.008) higher average sensitivity (0.6429±0.0537) than the previous CADe system (0.5325±0.0547). In conclusion, we developed a CADe system that exploits the spatial information obtained from early-phase scans and can be used in screening programs where abbreviated MRI protocols are used.
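The figure of merit described above can be obtained from a FROC curve by interpolation; a minimal sketch assuming the conventional operating points 1/8, 1/4, ..., 8 false positives per scan (the exact set of points is an assumption):

```python
import numpy as np

def average_sensitivity(fps, sens, fp_points=(0.125, 0.25, 0.5, 1, 2, 4, 8)):
    """Average lesion sensitivity at fixed FP-per-scan operating points,
    interpolated from a FROC curve (fps must be ascending)."""
    return float(np.mean(np.interp(fp_points, fps, sens)))
```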
KEYWORDS: Magnetic resonance imaging, 3D modeling, Breast, Mammography, Image registration, Finite element methods, X-rays, X-ray imaging, 3D acquisition, Tissues
Patient-specific finite element (FE) models of the breast have received increasing attention due to their potential for fusing images from different modalities. During the magnetic resonance imaging (MRI) to X-ray mammography registration procedure, the FE model is compressed, mimicking the mammographic acquisition. Subsequently, suspicious lesions in the MRI volume can be projected into the 2D mammographic space. However, most registration algorithms do not provide the reverse mapping, which prevents recovering the 3D geometrical information of lesions localized in the mammograms. In this work we introduce a fast method to localize the 3D position of a lesion within the MRI, using both the cranio-caudal (CC) and medio-lateral oblique (MLO) mammographic projections, by indexing the tetrahedral elements of the biomechanical model with a uniform grid. For each marked lesion in the full-field digital mammogram (FFDM), the X-ray path from the source to the marker is calculated. Barycentric coordinates are computed in the tetrahedra traversed by the ray. The list of elements and coordinates makes it possible to localize two curves within the MRI, and the closest point between the two curves is taken as the 3D position of the lesion. The registration errors obtained in the mammographic space are 9.89 ± 3.72 mm in the CC projection and 8.04 ± 4.68 mm in the MLO projection, and the error in the 3D MRI space is 10.29 ± 3.99 mm. The uniform grid is computed in 0.1 to 0.7 seconds, and the average time needed to compute the 3D location of a lesion is about 8 ms.
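The final step, taking the closest point between the two back-projected curves as the 3D lesion position, can be sketched as a brute-force search over sampled curve points (an illustration, not the paper's grid-accelerated implementation):

```python
import numpy as np

def closest_point_between_curves(curve_a, curve_b):
    """curve_a: (N, 3) and curve_b: (M, 3) arrays of points sampled along the two
    back-projected ray paths in MRI space. Returns the midpoint of the closest pair."""
    diff = curve_a[:, None, :] - curve_b[None, :, :]   # all pairwise difference vectors
    d2 = np.einsum("ijk,ijk->ij", diff, diff)          # pairwise squared distances
    i, j = np.unravel_index(np.argmin(d2), d2.shape)
    return 0.5 * (curve_a[i] + curve_b[j])             # estimated 3-D lesion position
```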
Automated three-dimensional breast ultrasound (ABUS) is a valuable adjunct to x-ray mammography for breast cancer screening of women with dense breasts. High image quality is essential for proper diagnostics and computer-aided detection. We propose an automated image quality assessment system for ABUS images that detects artifacts at the time of acquisition. To this end, we study three aspects that can corrupt ABUS images: the nipple position relative to the rest of the breast, the shadow caused by the nipple, and the shape of the breast contour in the image. Image processing and machine learning algorithms are combined to detect these artifacts, based on 368 clinical ABUS images that were rated manually by two experienced clinicians. At a specificity of 0.99, 55% of the images that were rated as low quality are detected by the proposed algorithms. The areas under the ROC curves of the single classifiers are 0.99 for the nipple position, 0.84 for the nipple shadow, and 0.89 for the breast contour shape. The proposed algorithms are fast and reliable, which makes them adequate for online evaluation of image quality during acquisition. The presented concept may be extended to further imaging modalities and quality aspects.
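Operating points such as the reported detection rate at 0.99 specificity can be read off an ROC curve; a minimal sketch with scikit-learn (variable names assumed):

```python
from sklearn.metrics import roc_curve

def sensitivity_at_specificity(y_true, scores, target_spec=0.99):
    """Sensitivity of a quality classifier at a fixed specificity operating point."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    feasible = fpr <= (1.0 - target_spec)   # points meeting the specificity constraint
    return tpr[feasible].max() if feasible.any() else 0.0
```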
Background parenchymal enhancement (BPE) observed in breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has been identified as an important biomarker associated with the risk of developing breast cancer. In this study, we present a fully automated framework for quantification of BPE. We initially segmented the fibroglandular tissue (FGT) of the breasts using an improved version of an existing method. Subsequently, we computed BPEabs (the volume of the enhancing tissue), BPErf (BPEabs divided by the FGT volume), and BPErb (BPEabs divided by the breast volume), using relative enhancement threshold values between 1% and 100%. To evaluate and compare the previous and improved FGT segmentation methods, we used 20 breast DCE-MRI scans and computed Dice similarity coefficient (DSC) values with respect to manual segmentations. For the evaluation of BPE quantification, we used a dataset of 95 breast DCE-MRI scans. Two radiologists, in individual reading sessions, visually analyzed the dataset and categorized each breast as minimal, mild, moderate, or marked BPE. To measure the correlation between the automated BPE values and the radiologists' assessments, we converted these values into ordinal categories and used Spearman's rho as a measure of correlation. According to our results, the new segmentation method obtained an average DSC of 0.81 ± 0.09, which was significantly higher (p<0.001) than that of the previous method (0.76 ± 0.10). The highest correlation between automated BPE categories and the radiologists' assessments was obtained with the BPErf measurement (r=0.55 and r=0.49, p<0.001 for both), while the correlation between the scores given by the two radiologists was 0.82 (p<0.001). The presented framework can be used to systematically investigate the correlation between BPE and breast cancer risk in large screening cohorts.
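A minimal sketch of the three BPE measures, assuming a per-voxel relative enhancement map and boolean FGT and breast masks (the enhancement definition and default threshold are assumptions; the paper sweeps the threshold from 1% to 100%):

```python
import numpy as np

def bpe_measures(rel_enhancement, fgt_mask, breast_mask, threshold=0.5, voxel_volume=1.0):
    """rel_enhancement: per-voxel relative enhancement, e.g. (post - pre) / pre;
    masks are boolean arrays; threshold in [0, 1]; voxel_volume in mm^3."""
    enhancing = (rel_enhancement >= threshold) & fgt_mask
    bpe_abs = enhancing.sum() * voxel_volume               # volume of enhancing tissue
    bpe_rf = bpe_abs / (fgt_mask.sum() * voxel_volume)     # relative to FGT volume
    bpe_rb = bpe_abs / (breast_mask.sum() * voxel_volume)  # relative to breast volume
    return bpe_abs, bpe_rf, bpe_rb
```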
Automated breast ultrasound (ABUS) is a 3D imaging technique that is rapidly emerging as a safe and relatively inexpensive modality for screening women with dense breasts. However, reading ABUS examinations is a very time-consuming task, since radiologists need to manually identify suspicious findings in all the different ABUS volumes available for each patient. Image analysis techniques that automatically link findings across volumes are required to speed up the clinical workflow and make ABUS screening more efficient. In this study, we propose an automated system that, given a location in the ABUS volume being inspected (source), finds the corresponding location in a target volume. The target volume can be a different view of the same study or the same view from a prior examination. The algorithm was evaluated using 118 linkages between suspicious abnormalities annotated in a dataset of ABUS images of 27 patients participating in a high-risk screening program. The distance between the predicted location and the center of the annotated lesion in the target volume was computed for evaluation. The mean ± stdev and median distance error achieved by the presented algorithm for linkages between volumes of the same study was 7.75 ± 6.71 mm and 5.16 mm, respectively. The performance was 9.54 ± 7.87 mm and 8.00 mm (mean ± stdev and median) for linkages between volumes from current and prior examinations. The proposed approach has the potential to minimize user interaction when finding correspondences among ABUS volumes.
In breast cancer screening for high-risk women, follow-up magnetic resonance imaging (MRI) studies are acquired with a time interval ranging from several months up to a few years. Prior MRI studies may provide additional clinical value when examining the current one and thus have the potential to increase the sensitivity and specificity of screening. To build a spatial correlation between suspicious findings in the current and prior studies, a reliable alignment method between follow-up studies is desirable. However, the long time interval, different scanners and imaging protocols, and varying breast compression can result in large deformations, which challenge the registration process.
In this work, we present a fast and robust spatial alignment framework that combines automated breast segmentation and current-prior registration techniques in a multi-level fashion. First, fully automatic breast segmentation is applied to extract breast masks, which are used to obtain an initial affine transform. Then, a non-rigid registration algorithm is applied, using normalized gradient fields as the similarity measure together with curvature regularization. A total of 29 subjects and 58 breast MR images were collected for performance assessment. To evaluate the global registration accuracy, volume overlap and boundary surface distance metrics were calculated, resulting in an average Dice similarity coefficient (DSC) of 0.96 and a root mean square distance (RMSD) of 1.64 mm. In addition, to measure local registration accuracy, a radiologist annotated 10 pairs of markers per subject in the current and prior studies, representing corresponding anatomical locations. The average distance error between marker pairs dropped from 67.37 mm to 10.86 mm after applying the registration.
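The normalized gradient field similarity used in the non-rigid stage rewards alignment of gradient directions rather than raw intensities, which makes it robust to scanner and protocol differences; a simplified NumPy sketch (the edge parameter eps is an assumption):

```python
import numpy as np

def ngf_similarity(fixed, moving, eps=1e-3):
    """Normalized gradient field similarity between two 3-D images:
    measures how well gradient directions align, independent of intensity scale."""
    gf = np.stack(np.gradient(fixed))    # per-axis gradients of the fixed image
    gm = np.stack(np.gradient(moving))   # per-axis gradients of the moving image
    nf = np.sqrt((gf ** 2).sum(axis=0) + eps ** 2)
    nm = np.sqrt((gm ** 2).sum(axis=0) + eps ** 2)
    cos = (gf * gm).sum(axis=0) / (nf * nm)
    return (cos ** 2).mean()  # approaches 1 where gradients are parallel or antiparallel
```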
A precise segmentation of breast tissue is often required for computer-aided diagnosis (CAD) of breast MRI. Only a few methods have been proposed to automatically segment the breast in MRI. Their authors reported satisfactory performance, but a fair comparison has not yet been made, as each breast segmentation method was evaluated on its own data set with different manual annotations. Moreover, breast volume overlap measures, which are commonly used for evaluation, do not seem adequate to accurately quantify segmentation quality: they are not sensitive to small errors, such as local misalignments, because the breast is much larger than other structures. In this work, two atlas-based approaches and a breast segmentation method based on a Hessian sheetness filter are exhaustively evaluated and benchmarked on a data set of 52 manually annotated breast MR images. Three quantitative measures, namely dense tissue error, pectoral muscle error, and pectoral surface distance, are defined to objectively reflect the practical use of breast segmentation in CAD methods. The evaluation measures provide important evidence to conclude that all three evaluated techniques perform accurate breast segmentations. More specifically, the atlas-based methods appear to be more precise, but require more computation time than the sheetness-based approach.
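Of the three measures, the pectoral surface distance is the least standard; below is a plausible NumPy/SciPy sketch of a mean surface distance between boolean masks (an illustration, not necessarily the paper's exact definition):

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def mean_surface_distance(mask_pred, mask_ref, spacing=(1.0, 1.0, 1.0)):
    """Mean distance (in mm, given voxel spacing) from the predicted boundary
    to the reference boundary, for boolean 3-D segmentation masks."""
    surf_pred = mask_pred & ~binary_erosion(mask_pred)  # boundary voxels of prediction
    surf_ref = mask_ref & ~binary_erosion(mask_ref)     # boundary voxels of reference
    # Distance from every voxel to the nearest reference-boundary voxel.
    dist_to_ref = distance_transform_edt(~surf_ref, sampling=spacing)
    return dist_to_ref[surf_pred].mean()
```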