Diagnosis of breast cancer is often achieved through expert radiologist examination of medical images such as mammograms. Computer-aided diagnosis (CADx) methods can aid radiologists in making diagnostic decisions, but such systems require a sufficient amount of training data in conjunction with efficient machine learning techniques. Our Spatially Localized Ensembles Sparse Analysis using Deep Features (DF-SLESA) machine learning model uses local information from features extracted by deep neural networks to learn and classify breast imaging patterns based on sparse approximations. We have also developed a new patch-sampling technique for learning sparse approximations and making classification decisions, which we denote PatchSample decomposition. The PatchSample method differs from our previous BlockBoost method in that it constructs larger dictionaries that hold not just location-specific information but a collective of visual information from all locations in the region of interest (ROI). Notably, we trained and tested our method on a merged dataset of mammograms obtained from two sources. Experimental results reached up to 67.80% classification accuracy (ACC) and 73.21% area under the ROC curve (AUC) using PatchSample decomposition on a merged dataset consisting of the MLO-view regions of interest of the MIAS and CBIS-DDSM datasets.
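The idea of pooling patches from all ROI locations into one collective dictionary can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `build_patch_dictionary`, the random patch sampler, and the stand-in `feature_fn` (which a real system would replace with a deep CNN feature extractor) are all assumptions for demonstration.

```python
import numpy as np

def build_patch_dictionary(roi, patch_size, n_patches, feature_fn, rng):
    """Sample patches from all locations of an ROI and stack their
    (unit-normalized) feature vectors as columns of one dictionary."""
    H, W = roi.shape
    atoms = []
    for _ in range(n_patches):
        # sample a patch at any valid location in the ROI
        r = rng.integers(0, H - patch_size + 1)
        c = rng.integers(0, W - patch_size + 1)
        patch = roi[r:r + patch_size, c:c + patch_size]
        f = feature_fn(patch)
        atoms.append(f / (np.linalg.norm(f) + 1e-12))
    return np.stack(atoms, axis=1)  # shape: (feature_dim, n_patches)

# toy usage with a placeholder feature extractor (hypothetical)
rng = np.random.default_rng(1)
roi = rng.random((128, 128))
feat = lambda p: p.ravel()[:256]  # stand-in for deep features
D = build_patch_dictionary(roi, 32, 50, feat, rng)
print(D.shape)  # (256, 50)
```

Because patches are drawn from every location rather than a fixed spatial block, the resulting dictionary mixes visual information from across the ROI, which is the distinction the abstract draws against the location-specific BlockBoost dictionaries.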
Breast cancer is the second most common cancer among women in the U.S., behind skin cancer. Early detection and characterization of breast masses are critical for effective diagnosis and treatment of breast cancer. Computer-aided breast mass characterization methods would help improve the accuracy of diagnoses, their reproducibility, and the throughput of breast cancer screening workflows. In this work, we introduce sparse representations of deep learning features for separating malignant from benign breast masses in mammograms. We expect the use of deep feature-based dictionaries to produce better benign/malignant class separation than straightforward sparse representation techniques and fine-tuned convolutional neural networks (CNNs). We performed 10- and 30-fold cross-validation experiments for classification of benign and malignant breast masses on the MIAS and DDSM mammographic datasets. The results show that the proposed deep feature sparse analysis produces better classification rates than conventional sparse representations and fine-tuned CNNs. The top areas under the receiver operating characteristic (ROC) curve (AUC) are 80.64% for 10-fold and 97.44% for 30-fold cross-validation on MIAS, and 77.29% for 10-fold and 76.02% for 30-fold cross-validation on DDSM. The main advantages of this approach are that it employs dictionaries of deep network features that are sparse in nature, and that it alleviates the need for large volumes of training data and lengthy training procedures. These results prompt further exploration of the relationship between sparse optimization problems and deep learning.
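A standard way to classify with class-specific sparse dictionaries is sparse-representation classification (SRC): code the test feature over each class dictionary and assign the class with the smallest reconstruction residual. The sketch below illustrates that general scheme with a greedy orthogonal matching pursuit solver on synthetic "deep features"; it is an assumed, minimal stand-in, not the paper's exact pipeline, and the names `omp` and `src_classify` are hypothetical.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: k-sparse code of y over dictionary D."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # re-fit coefficients on the selected atoms by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def src_classify(D_benign, D_malignant, feature, k=5):
    """Assign the class whose dictionary reconstructs the feature best."""
    residuals = {}
    for label, D in (("benign", D_benign), ("malignant", D_malignant)):
        x = omp(D, feature, k)
        residuals[label] = np.linalg.norm(feature - D @ x)
    return min(residuals, key=residuals.get)

# toy demonstration: per-class dictionaries of unit-norm feature atoms
rng = np.random.default_rng(0)
d, n = 64, 40
D_b = rng.normal(size=(d, n)); D_b /= np.linalg.norm(D_b, axis=0)
D_m = rng.normal(size=(d, n)); D_m /= np.linalg.norm(D_m, axis=0)
# a test feature synthesized from a few benign atoms
y = D_b[:, [3, 7, 11]] @ np.array([1.0, 0.5, -0.8])
print(src_classify(D_b, D_m, y))  # benign
```

The appeal of this style of classifier, consistent with the advantages the abstract names, is that the "training" step is essentially dictionary construction from existing feature vectors, so it avoids the long optimization runs and large training sets that fine-tuning a CNN typically requires.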