KEYWORDS: Breast, Image segmentation, Education and training, Visualization, Magnetic resonance imaging, Tumors, Statistical analysis, Binary data, Image classification, Breast cancer
Purpose: Current clinical assessment qualitatively describes background parenchymal enhancement (BPE) as minimal, mild, moderate, or marked based on the visually perceived volume and intensity of enhancement in normal fibroglandular breast tissue on dynamic contrast-enhanced (DCE)-MRI. Tumor enhancement may be included within the visual assessment of BPE, inflating the BPE estimate due to angiogenesis within the tumor. Using a dataset of 426 MRIs, we developed an automated method to segment the breasts, electronically remove lesions, and calculate scores to estimate BPE levels.
Approach: A U-Net was trained for breast segmentation from DCE-MRI maximum intensity projection (MIP) images. Fuzzy c-means clustering was used to segment lesions, and the lesion volume was removed prior to creating projections. U-Net outputs were applied to create projection images of both breasts together, the affected breast, and the unaffected breast, before and after lesion removal. BPE scores were calculated from various projection images, including MIPs and average intensity projections of the first or second postcontrast subtraction MRIs, to evaluate the effect of varying image parameters on automatic BPE assessment. Receiver operating characteristic (ROC) analysis was performed to determine the predictive value of the computed scores in BPE level classification tasks relative to radiologist ratings.
Results: Statistically significant trends were found between radiologist BPE ratings and calculated BPE scores for all breast regions (Kendall correlation, p < 0.001). Scores from all breast regions performed significantly better than guessing (p < 0.025, z-test). Results failed to show a statistically significant difference in performance with and without lesion removal. BPE scores of the affected breast in the second postcontrast subtraction MIP after lesion removal performed statistically better than random guessing across the various viewing projections and DCE time points.
Conclusions: The results demonstrate the potential for automatic BPE scoring to serve as a quantitative value for objective BPE level classification from breast DCE-MRI without the influence of lesion enhancement.
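For illustration, a minimal sketch of this scoring pipeline is shown below. It assumes binary breast and lesion masks as NumPy arrays; the zero fill value for removed lesion voxels and the mean-projected-enhancement score are simplifying assumptions, not the study's exact formulation.

```python
import numpy as np

def remove_lesion(subtraction_volume: np.ndarray, lesion_mask: np.ndarray) -> np.ndarray:
    """Remove segmented lesion voxels before projection.

    Zero filling is an assumed choice; the study removes the lesion
    volume prior to creating projections without specifying the fill.
    """
    cleaned = subtraction_volume.copy()
    cleaned[lesion_mask.astype(bool)] = 0
    return cleaned

def project(volume: np.ndarray, kind: str = "mip", axis: int = 0) -> np.ndarray:
    """Maximum or average intensity projection of a subtraction volume."""
    return volume.max(axis=axis) if kind == "mip" else volume.mean(axis=axis)

def bpe_score(projection: np.ndarray, breast_mask: np.ndarray) -> float:
    """Surrogate BPE score: mean projected enhancement over breast tissue."""
    return float(projection[breast_mask.astype(bool)].mean())
```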
In this study, we first introduce a novel AI-based system (MOM-ClaSeg) for multiple abnormality/disease detection and diagnostic report generation on PA/AP CXR images, which was recently developed by applying augmented Mask R-CNN deep learning and Decision Fusion Networks. We then evaluate the performance of the MOM-ClaSeg system in assisting radiologists with image interpretation and diagnostic report generation through a multi-reader-multi-case (MRMC) study. A total of 33,439 PA/AP CXR images were retrospectively collected from 15 hospitals and divided into an experimental group of 25,840 images and a control group of 7,599 images, processed with and without the MOM-ClaSeg system, respectively. In this MRMC study, 6 junior radiologists (5 to 10 years of experience) first read these images and generated initial diagnostic reports with or without viewing MOM-ClaSeg-generated results. Next, the initial reports were reviewed by 2 senior radiologists (>15 years of experience) to generate final reports. Additionally, 3 consensus expert radiologists (>25 years of experience) reconciled potential differences between initial and final reports. Comparison results showed that with MOM-ClaSeg, the diagnostic sensitivity of junior radiologists increased significantly by 18.67% (from 70.76% to 89.43%, P<0.001), while specificity decreased by 3.36% (from 99.49% to 96.13%, P<0.001). The average reading/diagnostic time in the experimental group with MOM-ClaSeg was reduced by 27.07% (P<0.001), with a particularly large reduction of 66.48% (P<0.001) on abnormal images, indicating that the MOM-ClaSeg system has potential for fast lung abnormality/disease triaging. This study demonstrates the feasibility of applying the first AI-based system to assist radiologists in image interpretation and diagnostic report generation, a promising step toward improved diagnostic performance and productivity in future clinical practice.
Radiomic features have been shown to add predictive power to risk-assessment models for future kidney decline in patients with autosomal dominant polycystic kidney disease (ADPKD), but these previous studies utilized only one imaging timepoint. Delta radiomics incorporates image features from multiple imaging timepoints and the change in features across these timepoints. There is a need to investigate delta radiomics in ADPKD and the benefit of incorporating delta-features in risk-assessment models, taking advantage of imaging that is clinically indicated for these patients. A cohort of 152 patients and their respective T2-weighted fat-saturated magnetic resonance imaging coronal images were used to predict progression to chronic kidney disease (CKD) stage 3A, stage 3B, and >30% reduction in estimated glomerular filtration rate (eGFR) at 60-month follow-up using radiomic features at (1) baseline imaging, (2) 24-month follow-up, and (3) 24-month delta-features. Prediction models utilizing delta radiomics alone yielded area under the receiver operating characteristic curve (AUC) values of 0.52 to 0.55, versus 0.67 to 0.76 for models using radiomic features from single timepoints or combined timepoints. Trends of increasing AUC values were observed when combining clinical and radiomic features for predicting CKD stage 3A and >30% reduction in eGFR.
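As a sketch of the delta-feature construction (the exact delta definition is not given in the abstract, so the simple difference below is an assumption; percent change is a common alternative):

```python
import numpy as np

def delta_features(baseline: dict[str, float], followup: dict[str, float]) -> dict[str, float]:
    """Delta radiomics: change in each feature from baseline to 24-month imaging."""
    return {name: followup[name] - baseline[name] for name in baseline}

# Illustrative usage with hypothetical feature names:
# base = {"glcm_contrast": 1.2, "mean_intensity": 340.0}
# follow = {"glcm_contrast": 1.5, "mean_intensity": 310.0}
# delta = delta_features(base, follow)  # {"glcm_contrast": 0.3, "mean_intensity": -30.0}
```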
In recent years, there has been significant interest in evaluating perivascular spaces (PVS) due to their potential to characterize multiple neurological conditions. In this study, we demonstrated the potential to improve PVS evaluation at scale by introducing an AI algorithm to review identified PVS candidates and remove false positives on T2-weighted MRI. For this task, we achieved an AUC of 0.93 +/- 0.02 while identifying optimal model characteristics and exploring areas of future improvement and investigation, thus demonstrating the potential for AI to replace human review in PVS quantification at scale.
KEYWORDS: Image segmentation, Breast, 3D image processing, 3D imaging standards, Magnetic resonance imaging, Education and training, Cross validation, 3D modeling, 3D image enhancement, Artificial intelligence
Purpose: Given the dependence of radiomic-based computer-aided diagnosis artificial intelligence on accurate lesion segmentation, we assessed the performances of 2D and 3D U-Nets in breast lesion segmentation on dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) relative to fuzzy c-means (FCM) and radiologist segmentations.
Approach: Using 994 unique breast lesions imaged with DCE-MRI, three segmentation algorithms (FCM clustering and 2D and 3D U-Net convolutional neural networks) were investigated. Center slice segmentations produced by FCM, 2D U-Net, and 3D U-Net were evaluated using radiologist segmentations as truth, and volumetric segmentations produced by 2D U-Net slices and 3D U-Net were compared using FCM as a surrogate reference standard. Fivefold cross-validation by lesion was conducted on the U-Nets; the Dice similarity coefficient (DSC) and Hausdorff distance (HD) served as performance metrics. Segmentation performances were compared across different input image and lesion types.
Results: The 2D U-Net outperformed the 3D U-Net for center slice (DSC, HD p < 0.001) and volume segmentations (DSC, HD p < 0.001). The 2D U-Net outperformed FCM in center slice segmentation (DSC p < 0.001). The use of second postcontrast subtraction images showed greater performance than first postcontrast subtraction images using the 2D and 3D U-Nets (DSC p < 0.05). Additionally, mass segmentation outperformed nonmass segmentation from first and second postcontrast subtraction images using the 2D and 3D U-Nets (DSC, HD p < 0.001).
Conclusions: Results suggest that the 2D U-Net is promising in segmenting mass and nonmass enhancing breast lesions from first and second postcontrast subtraction MRIs and thus could be an effective alternative to FCM or the 3D U-Net.
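The two performance metrics are standard; for reference, a minimal NumPy/SciPy sketch (binary masks in, scalar metrics out) is shown below.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def hausdorff(pred: np.ndarray, truth: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the foreground point sets."""
    p, t = np.argwhere(pred), np.argwhere(truth)
    return max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])
```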
KEYWORDS: COVID 19, Chest imaging, Data modeling, Deep learning, Education and training, Performance modeling, Radiography, Medical imaging, Machine learning, Diseases and disorders
Purpose: Image-based prediction of coronavirus disease 2019 (COVID-19) severity and resource needs can be an important means of addressing the COVID-19 pandemic. In this study, we propose an artificial intelligence/machine learning (AI/ML) COVID-19 prognosis method to predict patients' needs for intensive care by analyzing chest X-ray radiography (CXR) images using deep learning.
Approach: The dataset consisted of 8357 CXR exams from 5046 COVID-19-positive patients, as confirmed by reverse transcription polymerase chain reaction (RT-PCR) tests for the SARS-CoV-2 virus, with a training/validation/test split of 64%/16%/20% at the patient level. Our model involved a DenseNet121 network with a sequential transfer learning technique employed to train on a sequence of gradually more specific and complex tasks: (1) fine-tuning a model pretrained on ImageNet using a previously established CXR dataset with a broad spectrum of pathologies; (2) refining on another established dataset to detect pneumonia; and (3) fine-tuning using our in-house training/validation datasets to predict patients' needs for intensive care within 24, 48, 72, and 96 h following the CXR exams. The classification performances were evaluated on our independent test set (CXR exams of 1048 patients) using the area under the receiver operating characteristic curve (AUC) as the figure of merit in the task of distinguishing between those COVID-19-positive patients who required intensive care following the imaging exam and those who did not.
Results: Our proposed AI/ML model achieved an AUC (95% confidence interval) of 0.78 (0.74, 0.81) when predicting the need for intensive care 24 h in advance, and at least 0.76 (0.73, 0.80) for 48 h or more in advance, using predictions based on the AI prognostic marker derived from the CXR images.
Conclusions: This AI/ML prediction model for patients' needs for intensive care has the potential to support both clinical decision-making and resource management.
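A sketch of the sequential transfer-learning chain, assuming PyTorch/torchvision; the head sizes, checkpoint filenames, and weight-filtering details below are illustrative, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121

def build_stage(num_outputs: int, prev_checkpoint: str | None = None) -> nn.Module:
    """One stage of the chain: start from ImageNet weights, optionally
    reload the previous stage's backbone, and attach a new task head."""
    model = densenet121(weights="IMAGENET1K_V1")
    if prev_checkpoint is not None:
        state = torch.load(prev_checkpoint)
        state = {k: v for k, v in state.items() if not k.startswith("classifier")}
        model.load_state_dict(state, strict=False)  # keep backbone, drop old head
    model.classifier = nn.Linear(model.classifier.in_features, num_outputs)
    return model

# Illustrative chain (checkpoint names are hypothetical):
# stage1 = build_stage(14)                 # broad CXR pathologies
# stage2 = build_stage(1, "stage1.pt")     # pneumonia detection
# stage3 = build_stage(1, "stage2.pt")     # intensive-care prediction
```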
Building machine learning models from scratch for clinical applications can be a challenging undertaking requiring varied levels of expertise. Given the heterogeneous nature of input data and specific task requirements, even seasoned developers and researchers may occasionally run into issues with incompatible frameworks. This is further complicated in the context of diagnostic radiology. Therefore, we developed the CRP10 AI Application Interface (CRP10AII) as a component of the Medical Imaging and Data Resource Center (MIDRC) to deliver a modular and user-friendly software solution that can efficiently address the demands of physicians and early AI developers to explore, train, and test AI algorithms. The CRP10AII tool is a Python-based web framework connected to the data commons (GEN3) that offers the ability to develop AI models from scratch or employ pre-trained models, while allowing for visualization and interpretation of the AI model's predictions. Here, we evaluate the capabilities of CRP10AII and its related human-API interaction factors. This evaluation investigates various aspects of the API, including: (i) robustness and ease of use; (ii) the help that visualization provides in decision-making tasks; and (iii) further improvements necessary for initial AI researchers with different levels of medical imaging and AI expertise. Users initially experienced trouble testing the API; however, these problems were subsequently fixed with the help of additional explanations. The findings of the user evaluation demonstrate that although the different options of the API are generally easy to understand and use and are helpful in decision-making tasks for users with and without experience in medical imaging and AI, there are differences in how the various options are understood and used. We also collected additional suggestions, such as increasing the number of information fields and including more interactive components, to make the API more generalizable and customizable.
To assess a Smart Imagery Framing and Truthing (SIFT) system in automatically labeling and annotating chest X-ray (CXR) images with multiple diseases as an assist to radiologists on multi-disease CXRs. The SIFT system was developed by integrating a convolutional neural network-based augmented Mask R-CNN and a multi-layer perceptron neural network. It was trained with images containing 307,415 ROIs representing 69 different abnormalities and 67,071 normal CXRs. SIFT automatically labels each ROI with a specific type of abnormality, annotates its fine-grained boundary, gives a confidence score, and recommends other possible types of abnormality. An independent set of 178 CXRs containing 272 ROIs depicting five different abnormalities, including pulmonary tuberculosis, pulmonary nodule, pneumonia, COVID-19, and fibrogenesis, was used to evaluate the performance of three radiologists in a double-blinded study. Each radiologist first manually annotated each ROI without SIFT. Two weeks later, the radiologist annotated the same ROIs with SIFT aid to generate final results. Consistency, efficiency, and accuracy for radiologists with and without SIFT were evaluated. With SIFT, radiologists accepted 93% of SIFT-annotated areas, and variation across annotated areas was reduced by 28.23%. Inter-observer variation improved by 25.27% on averaged IoU. The consensus true positive rate increased by 5.00% (p=0.16), and the false positive rate decreased by 27.70% (p<0.001). The radiologists' time to annotate these cases decreased by 42.30%. Performance in labeling abnormalities remained statistically the same. This independent observer study showed that SIFT is a promising step toward improving the consistency and efficiency of annotation, which is important for improving clinical X-ray diagnostic and monitoring efficiency.
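For reference, the agreement metric averaged above, intersection over union (IoU), reduces to a few lines over binary annotation masks; a minimal sketch:

```python
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection over union between two annotated regions (binary masks)."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 1.0
```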
Opportunistic disease detection on low-dose CT (LDCT) scans is desirable given the expanded use of LDCT scans for lung cancer screening. In this study, a machine learning paradigm called multiple instance learning (MIL) is investigated for emphysema detection in LDCT scans. The top-performing method achieved an area under the ROC curve of 0.93 +/- 0.04 in the task of detecting emphysema in LDCT scans through a combination of MIL and transfer learning. These results suggest strong potential for the use of MIL in automatic, opportunistic LDCT scan assessment.
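A minimal sketch of the MIL pooling step, under the standard assumption that a scan (bag) is positive if any of its sections (instances) is positive; the abstract does not state which pooling variant the top-performing method used, so two common choices are shown.

```python
import numpy as np

def max_pool_bag(instance_probs: np.ndarray) -> float:
    """Hard MIL pooling: the bag score is the most suspicious instance."""
    return float(instance_probs.max())

def noisy_or_bag(instance_probs: np.ndarray) -> float:
    """Soft MIL pooling: probability that at least one instance is positive."""
    return float(1.0 - np.prod(1.0 - instance_probs))
```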
The coronavirus disease 2019 (COVID-19) pandemic has wreaked havoc across the world. It also created a need for the urgent development of efficacious predictive diagnostics, specifically, artificial intelligence (AI) methods applied to medical imaging. This has led to the convergence of experts from multiple disciplines to solve this global pandemic, including clinicians, medical physicists, imaging scientists, computer scientists, and informatics experts, bringing to bear the best of these fields for solving the challenges of the COVID-19 pandemic. However, such a convergence over a very brief period of time has had unintended consequences and created its own challenges. As part of the Medical Imaging and Data Resource Center (MIDRC) initiative, we discuss the lessons learned from career transitions across the three involved disciplines (radiology, medical imaging physics, and computer science) and draw recommendations based on these experiences by analyzing the challenges associated with each of the three transition types: (1) AI of non-imaging data to AI of medical imaging data, (2) medical imaging clinician to AI of medical imaging, and (3) AI of medical imaging to AI of COVID-19 imaging. The diffusion of knowledge among these career transitions could be accomplished more effectively by recognizing their associated intricacies. These lessons learned in transitioning to AI in the medical imaging of COVID-19 can inform and enhance future AI applications, making the whole of the transitions more than the sum of each discipline, for confronting an emergency like the COVID-19 pandemic or solving emerging problems in biomedicine.
Systemic lupus erythematosus (SLE) is a complex, systemic autoimmune disease with many clinical presentations, including lupus nephritis (LuN), or chronic inflammation of the kidneys. Current therapies for SLE are only modestly effective, highlighting the need to better understand networks of immune cells in SLE and LuN. In this work, we assess the performance of two convolutional neural network (CNN) architectures, Mask R-CNN and U-Net, in the task of instance segmentation of five immune-cell classes in 31 LuN biopsies. Each biopsy was stained for myeloid dendritic cells (mDCs), plasmacytoid dendritic cells (pDCs), B cells, and two populations of T cells, then imaged on a Leica SP8 fluorescence confocal microscope. Two instances of Mask R-CNN were trained on manually segmented images, one on lymphocytes (T cells and B cells) and one on DCs (pDCs and mDCs), resulting in average network sensitivities of 0.88 ± 0.04 and 0.82 ± 0.03, respectively. Five U-Nets, one for each of the five cell classes, were trained, resulting in an average sensitivity of 0.85 ± 0.09 across all cell classes. Mask R-CNN yielded fewer false positives for all cell classes, with an average precision of 0.76 ± 0.03 compared with the U-Net object-level average precision of 0.43 ± 0.12. Overall, Mask R-CNN was more robust than the U-Net for segmenting cells in immunofluorescence images of kidney biopsies from lupus nephritis patients.
Lupus nephritis (LuN) is an inflammatory kidney disease characterized by the infiltration of immune cells into the kidney, including T cells, B cells, and dendritic cells. Here, we combine high-dimensional immunofluorescence microscopy with computer vision to identify and segment multiple populations of cells. A U-Net was trained to segment CD4+ T cells in high-resolution LuN biopsy images and subsequently used to make CD4+ T cell predictions on a test set from a lower-resolution, high-dimensional LuN dataset. This produced higher precision but lower recall and intersection over union for cells in the low-resolution dataset. Further applications of U-Nets to immune cell segmentation will be discussed.
Several disease states, including cancer and autoimmunity, are characterized by the infiltration of large populations of immune cells into organ tissue. The degree and composition of these invading cells have been correlated with patient outcomes, suggesting that the intercellular interactions occurring in inflamed tissue play a role in pathology. Immunofluorescence staining paired with confocal microscopy produces detailed visualizations of these interactions. Applying computer vision and machine learning methods to the resulting images allows for robust quantification of immune infiltrates. We are developing an analytical pipeline to assess the immune environments of two distinct disease states: lupus nephritis and triple-negative breast cancer (TNBC). Biopsies of inflamed kidney tissue (lupus) and tumors (TNBC) were stained and imaged for panels of 20 markers using a strip-reprobe technique. This set of markers interrogates populations of T cells, B cells, and antigen-presenting cells. To detect T cells, we first trained a U-Net to segment CD3+CD4+ T cells in images of lupus biopsies and achieved an object-level precision of 0.855 and recall of 0.607 on an independent test set. We then evaluated the generalizability of this network to CD3+CD8+ T cells in lupus nephritis and CD3+CD4+ T cells in TNBC, and the extent to which fine-tuning the network improved performance for these cell types. We found that recall increased moderately with fine-tuning, while precision did not. Further work will focus on developing robust methods for segmenting a larger variety of T cell markers in both tissue contexts with high fidelity.
In recent years, the assessment of non-cancerous diseases on low-dose CT (LDCT) scans for lung cancer screening has gained significant attention. Osteoporosis shares many risk factors with lung cancer, and the thoracic and upper lumbar vertebrae can be visualized within the screening scan range, making diagnosis of osteoporosis viable. However, manual assessments can be time-consuming and inconsistent. This study investigates the application of radiomic texture analysis (RTA) for the automatic detection of osteoporosis. In this retrospective analysis of 613 CT screening scans acquired from the I-ELCAP database, quantitative features, including those based on intensity, texture, and frequency, were extracted from ROIs manually placed within the central body of the T6 and L1 vertebrae on axial images. The top 4 individually performing features were selected to train an SVM classifier for classification among osteoporotic, abnormal, and normal vertebrae. Performance was evaluated through ROC analysis, with areas under the ROC curve of 0.925 +/- 0.054 for the T6 vertebra and 0.847 +/- 0.092 for L1. Further, RTA was compared with a radiologist's visual diagnosis and a previously published automatic bone mineral density (BMD) calculation approach. The RTA technique correlated well with the automatic BMD calculation, with Pearson linear correlation coefficients of -0.752 and -0.653 for the T6 and L1 vertebrae, respectively, and qualitative comparison to the visual assessment was favorable. Based on the ROC results and the correlation with previously established methods, RTA demonstrated significant potential for quantifying vertebral bodies in axial CT screening scans and characterizing the vertebral disease state.
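A sketch of the feature-selection and classifier stage described above, using scikit-learn; for simplicity, the sketch scores features against a binary label (e.g., osteoporotic vs. not), whereas the study distinguished three vertebral states.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC

def top_k_features(X: np.ndarray, y: np.ndarray, k: int = 4) -> np.ndarray:
    """Rank features by individual AUC and keep the k best,
    mirroring the selection of the top 4 single features."""
    aucs = np.array([roc_auc_score(y, X[:, j]) for j in range(X.shape[1])])
    strength = np.abs(aucs - 0.5)   # AUC < 0.5 is informative with sign flipped
    return np.argsort(strength)[::-1][:k]

# Hypothetical usage on a training/test split:
# selected = top_k_features(X_train, y_train)
# clf = SVC(probability=True).fit(X_train[:, selected], y_train)
# scores = clf.predict_proba(X_test[:, selected])[:, 1]
```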
Computer-aided diagnosis based on features extracted from medical images relies heavily on accurate lesion segmentation before feature extraction. Using 994 unique breast lesions imaged with dynamic contrast-enhanced (DCE) MRI, several segmentation algorithms were investigated. The first method is fuzzy c-means (FCM), a well-established unsupervised clustering algorithm used on breast MRIs. The second and third methods are based on the convolutional neural network U-Net, a widely used deep learning method for image segmentation, applied to two- and three-dimensional MRI data, respectively. The purpose of this study was twofold: (1) to assess the performances of 2D (slice-by-slice) and 3D U-Nets in breast lesion segmentation on DCE-MRI when trained with FCM segmentations, and (2) to compare their performance to that of FCM. Center slice segmentations produced by FCM, 2D U-Net, and 3D U-Net were evaluated using radiologist segmentations as truth, and volumetric segmentations produced by 2D U-Net (slice-by-slice) and 3D U-Net were compared using FCM as a surrogate truth. Fivefold cross-validation was conducted on the U-Nets, and the Dice similarity coefficient (DSC) and Hausdorff distance (HD) were used as performance metrics. Although the 3D U-Net performed well, the 2D U-Net outperformed the 3D U-Net, both for center slice (DSC p = 4.13 × 10⁻⁹, HD p = 1.40 × 10⁻²) and volume segmentations (DSC p = 2.72 × 10⁻⁸³, HD p = 2.28 × 10⁻¹⁰). Additionally, the 2D U-Net outperformed FCM in center slice segmentation in terms of DSC (p = 1.09 × 10⁻⁷). The results suggest that the 2D U-Net is promising in segmenting breast lesions and could be an effective alternative to FCM.
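For reference, a compact re-implementation of the FCM clustering step (objective-function form with fuzzifier m); this is a generic sketch, not the study's exact initialization or feature space.

```python
import numpy as np

def fuzzy_cmeans(X: np.ndarray, c: int = 2, m: float = 2.0,
                 n_iter: int = 100, seed: int = 0):
    """Minimal fuzzy c-means. X: (n_samples, n_features).
    Returns cluster centers (c, n_features) and memberships (n_samples, c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)           # memberships sum to 1 per sample
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))           # standard membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U
```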
Purpose: Given the recent COVID-19 pandemic and its stress on global medical resources, we present the development of a machine-intelligent method for thoracic computed tomography (CT) to inform the management of patients on steroid treatment.
Approach: Transfer learning has demonstrated strong performance when applied to medical imaging, particularly when only limited data are available. A cascaded transfer learning approach extracted quantitative features from thoracic CT sections using a fine-tuned VGG19 network. The extracted slice features were axially pooled to provide a CT-scan-level representation of thoracic characteristics and a support vector machine was trained to distinguish between patients who required steroid administration and those who did not, with performance evaluated through receiver operating characteristic (ROC) curve analysis. Least-squares fitting was used to assess temporal trends using the transfer learning approach, providing a preliminary method for monitoring disease progression.
Results: In the task of identifying patients who should receive steroid treatments, this approach yielded an area under the ROC curve of 0.85 +/- 0.10 and demonstrated significant separation between patients who received steroids and those who did not. Furthermore, temporal trend analysis of the prediction score matched expected progression during hospitalization for both groups, with separation at early timepoints prior to convergence near the end of the duration of hospitalization.
Conclusions: The proposed cascaded deep learning method has strong potential for informing clinical decision-making and monitoring patient treatment.
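A sketch of the cascaded approach under stated assumptions: a VGG19 backbone (here with ImageNet weights standing in for the fine-tuned network) embeds each axial section, slice embeddings are average-pooled along the scan axis, and an SVM separates the steroid and non-steroid groups.

```python
import numpy as np
import torch
from torchvision.models import vgg19
from sklearn.svm import SVC

backbone = vgg19(weights="IMAGENET1K_V1").features.eval()

def scan_representation(slices: torch.Tensor) -> np.ndarray:
    """slices: (n_slices, 3, 224, 224) preprocessed CT sections.
    Returns a single scan-level feature vector."""
    with torch.no_grad():
        feats = backbone(slices).mean(dim=(2, 3))   # spatial pooling per slice
    return feats.mean(dim=0).numpy()                # axial pooling across slices

# Hypothetical usage; `scans` and `steroid_labels` are placeholders:
# X = np.stack([scan_representation(s) for s in scans])
# clf = SVC(probability=True).fit(X, steroid_labels)
```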
KEYWORDS: Image segmentation, Medical imaging, Binary data, Computed tomography, Radiology, Heart, Medical physics, Medicine, Network architectures, Medical research
In general, deep networks are biased by the truth data provided to the network in training. Many recent studies have focused on understanding and avoiding biases in deep networks so that they can be corrected in future predictions. In particular, as deep networks see increased implementation, it is important to explore biases to understand where predictions can fail. One potential source of bias is the truth data provided to the network. For example, if a training set consists of only white males, predictive performance is likely to be better on a testing set of white males than on a testing set of African-American females. The U-Net architecture is a deep network that has seen widespread use in recent years, particularly for medical imaging segmentation tasks. The network is trained using a binary mask delineating the object to be segmented, typically produced using manual or semi-automated methods. It is possible for the manual/semi-automated method to yield biased truth; thus, the purpose of our study is to evaluate the impact of varying truth data, as provided by two different observers, on U-Net segmentation performance. Additionally, a common problem in medical imaging research is a lack of data, forcing many studies to be performed with insufficient datasets. However, the U-Net has been shown to achieve sufficient segmentation performance with small training set sizes; thus, we also investigate the impact of training set size on U-Net performance for a simple segmentation task in low-dose thoracic CT scans. This also serves to verify that the results in the observer variability portion of this study are not caused by a lack of sufficient training data.
Low-dose thoracic CT (LDCT) screening has provided a low-risk method of obtaining useful clinical information, albeit with lower-quality images. Coronary artery calcium (CAC), a major indicator of cardiovascular disease, can be visualized on LDCT images. Additionally, the U-Net architecture has shown outstanding performance in a variety of medical imaging tasks, including image segmentation. Thus, the purpose of this study is to analyze the potential of the U-Net in the classification and localization of CAC in LDCT images. This study was performed with 814 LDCT cases with radiologist-determined CAC severity scores. A total of 3 truth masks per image were manually created to train 3 U-Nets, which were used to define the CAC search region, identify CAC candidates, and eliminate false positives (namely, aortic valve calcifications). Additionally, a single network tasked only with CAC candidate identification was tested to assess the need for the different sections of the cascade of U-Nets. All CAC segmentation outputs were assessed using ROC analysis in the task of determining whether or not a case contained any CAC, with the area under the ROC curve (AUC) as the performance metric; preliminary analysis showed potential for extension to a full classification task. CAC detection through the total cascade of 3 networks achieved an AUC of 0.97 +/- 0.01. Overall, this study shows significant promise in the localization and classification of CAC in LDCT images using a cascade of U-Nets.
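The composition of the three U-Net stages can be expressed compactly; the sketch below treats each trained network as a callable returning a probability map, with the 0.5 thresholds as placeholder choices.

```python
import numpy as np

def cascade_cac_mask(ct_slice, search_net, candidate_net, fp_net) -> np.ndarray:
    """Compose the three-stage cascade: restrict to the CAC search region,
    detect candidates inside it, then remove predicted false positives
    (e.g., aortic valve calcifications)."""
    search = search_net(ct_slice) > 0.5
    candidates = (candidate_net(ct_slice) > 0.5) & search
    false_pos = fp_net(ct_slice) > 0.5
    return candidates & ~false_pos
```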
Deep learning is expanding in the detection and diagnosis of abnormalities, including coronary artery calcification (CAC), in CT. CACs can also be visualized on low-dose thoracic screening CT (LDCT), and thus, in this study, deep learning is investigated for the detection of CACs and assessment of their severity on LDCT images. The study dataset included 863 LDCT cases, each assigned a case severity score related to the Agatston score and ranging between 0 and 12 (0 = no CAC present, 12 = severe CACs). Within the cardiac region, 224 × 224 pixel ROIs were extracted from each CT slice and input to a convolutional neural network (CNN). CNN-based features were extracted using a pre-trained VGG19 and merged with a support vector machine (SVM), yielding a slice-level likelihood score of the presence of CACs. Case prediction scores were obtained by taking the maximum and mean scores of all slices belonging to that case. The area under the ROC curve (AUC) was used to assess the discrimination performance level. Using a randomly selected subset of images containing similar amounts of each severity subtype, the SVM performed better using the maximum slice score per case (AUC = 0.79, standard error = 0.03). While this AUC value does not reach those found in similar studies for diagnostic CT and cardiac CT angiography, this study demonstrates potential for deep learning use in LDCT screening programs.