Purpose: Automatic comprehensive reporting of coronary artery disease (CAD) requires anatomical localization of coronary artery pathologies. To address this, we propose a fully automatic method for extraction and anatomical labeling of the coronary artery tree using deep learning.
Approach: We include coronary CT angiography (CCTA) scans of 104 patients from two hospitals. Reference annotations of coronary artery tree centerlines and labels of coronary artery segments were assigned to 10 segment classes following the American Heart Association guidelines. Our automatic method first extracts the coronary artery tree from CCTA by automatically placing a large number of seed points and simultaneously tracking vessel-like structures from these points. Thereafter, the extracted tree is refined to retain coronary arteries only, which are subsequently labeled with a multi-resolution ensemble of graph convolutional neural networks that combine geometrical and image intensity information from adjacent segments.
Results: The method is evaluated on its ability to extract the coronary tree and to label its segments, by comparing the automatically derived labels with the reference labels. A separate assessment of tree extraction yielded an F1 score of 0.85. Evaluation of the combined method leads to an average F1 score of 0.74.
Conclusions: The results demonstrate that our method enables fully automatic extraction and anatomical labeling of coronary artery trees from CCTA scans. It therefore has the potential to facilitate detailed automatic reporting of CAD.
As deep learning has been widely used for computer-aided diagnosis, we wished to know whether attribution maps obtained using gradient back-propagation could correctly highlight the patterns of disease subtypes discovered by a deep learning classifier. As the correctness of attribution maps is difficult to evaluate directly on medical images, we used synthetic data mimicking the difference between brain MRI of controls and demented patients to design more reliable evaluation criteria for attribution maps. We demonstrated that attribution maps may mix the regions associated with different subtypes for small data sets, while they could accurately characterize both subtypes using a large data set. We then proposed simple data augmentation techniques and showed that they could improve the coherence of the explanations for a small data set.
KEYWORDS: Convolutional neural networks, Image segmentation, Arteries, Magnetic resonance imaging, 3D image processing, Visualization, Data centers, Image processing, Data modeling
Carotid artery vessel wall thickness measurement is an essential step in the monitoring of patients with atherosclerosis. This requires accurate segmentation of the vessel wall, i.e., the region between an artery’s lumen and outer wall, in black-blood magnetic resonance (MR) images. Commonly used convolutional neural networks (CNNs) for semantic segmentation are suboptimal for this task as their use does not guarantee a contiguous ring-shaped segmentation. Instead, in this work, we cast vessel wall segmentation as a multi-task regression problem in a polar coordinate system. For each carotid artery in each axial image slice, we aim to simultaneously find two non-intersecting nested contours that together delineate the vessel wall. CNNs applied to this problem enable an inductive bias that guarantees ring-shaped vessel walls. Moreover, we identify a problem-specific training data augmentation technique that substantially affects segmentation performance. We apply our method to segmentation of the internal and external carotid artery wall, and achieve top-ranking quantitative results in a public challenge, i.e., a median Dice similarity coefficient of 0.813 for the vessel wall and median Hausdorff distances of 0.552 mm and 0.776 mm for lumen and outer wall, respectively. Moreover, we show how the method improves over a conventional semantic segmentation approach. These results show that it is feasible to automatically obtain anatomically plausible segmentations of the carotid vessel wall with high accuracy.
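The nested-contour constraint described above can be enforced directly at the output stage. The sketch below, using hypothetical constant radii rather than actual network outputs, shows how predicting a lumen radius plus a non-negative wall thickness per angle yields two contours that cannot intersect:

```python
import numpy as np

def polar_to_contours(center, r_lumen, wall_thickness):
    """Convert per-angle radii to two nested Cartesian contours.

    Predicting a non-negative wall thickness on top of the lumen
    radius guarantees that the outer contour never crosses the lumen
    contour, so the vessel wall is ring-shaped by construction.
    """
    n = len(r_lumen)
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    r_outer = r_lumen + np.maximum(wall_thickness, 0.0)  # enforce nesting
    cx, cy = center
    lumen = np.stack([cx + r_lumen * np.cos(theta),
                      cy + r_lumen * np.sin(theta)], axis=1)
    outer = np.stack([cx + r_outer * np.cos(theta),
                      cy + r_outer * np.sin(theta)], axis=1)
    return lumen, outer

# toy example: a circular lumen of radius 2 mm with a 1 mm wall
lumen, outer = polar_to_contours((0.0, 0.0),
                                 np.full(64, 2.0), np.full(64, 1.0))
```

In a network, the non-negativity of the thickness channel could be obtained with, e.g., a softplus activation; that choice is an assumption here, not a detail stated in the abstract.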
Accurately labeled segments of the coronary artery trees are important for diagnostic reporting of coronary artery disease. As current automatic reporting tools do not consider anatomical segment labels, accurate automatic solutions for deriving these labels would be of great value. We propose an automatic method for labeling segments in coronary artery trees represented by centerlines automatically extracted from CCTA images. Using the connectivity between the centerlines, we construct a tree graph. Coronary artery segments are defined as edges of this graph and characterized by location and geometry features. The constructed coronary artery tree is transformed into a linegraph and used as input to a graph attention network, which is trained to classify labels of coronary artery segments. The method was evaluated on 71 CCTA images, achieving an F1-score of 92.4% averaged over all patients and segments. The results indicate that graph attention networks are suitable for coronary artery tree labeling.
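The tree-to-line-graph transformation can be illustrated with a minimal sketch; the toy three-segment tree and the pure-Python adjacency representation below are illustrative assumptions, not the paper's implementation:

```python
from itertools import combinations

def line_graph(edges):
    """Build the line graph of a tree: each original edge (a coronary
    segment) becomes a node, and two nodes are connected if their
    segments share an endpoint (a bifurcation or the ostium)."""
    adj = {e: set() for e in edges}
    for e, f in combinations(edges, 2):
        if set(e) & set(f):  # segments share an endpoint
            adj[e].add(f)
            adj[f].add(e)
    return adj

# toy coronary tree: node 0 is the ostium, node 1 a bifurcation
tree_edges = [(0, 1), (1, 2), (1, 3)]
lg = line_graph(tree_edges)
```

In the line graph, per-segment location and geometry features become node features, which is what makes segment classification a node classification problem for the graph attention network.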
4D cardiac CT angiography (CCTA) images acquired for transcatheter aortic valve implantation (TAVI) planning provide a wealth of information about the morphology of the heart throughout the cardiac cycle. We propose a deep learning method to automatically segment the cardiac chambers and myocardium in 4D CCTA. We obtain automatic segmentations in 472 patients and use these to automatically identify end-systolic (ES) and end-diastolic (ED) phases, and to determine the left ventricular ejection fraction (LVEF). Our results show that automatic segmentation of cardiac structures through the cardiac cycle is feasible (median Dice similarity coefficient 0.908, median average symmetric surface distance 1.59 mm). Moreover, we demonstrate that these segmentations can be used to accurately identify ES and ED phases (bias [limits of agreement] of 1.81 [-11.0; 14.7]% and -0.02 [-14.1; 14.1]%). Finally, we show that there is correspondence between LVEF values determined from CCTA and echocardiography (-1.71 [-25.0; 21.6]%). Our automatic deep learning approach to segmentation has the potential to routinely extract functional information from 4D CCTA.
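How the ES/ED phases and the LVEF follow from per-phase segmentation volumes can be sketched as follows; the phase/volume values are made up for illustration:

```python
def ejection_fraction(volumes):
    """Identify ED/ES as the phases with maximum and minimum LV
    blood-pool volume, then compute the ejection fraction.

    `volumes` maps cardiac phase (e.g. % of the R-R interval) to LV
    cavity volume in mL, obtained from the automatic segmentations as
    voxel count times voxel volume.
    """
    ed_phase = max(volumes, key=volumes.get)  # end-diastole: largest LV
    es_phase = min(volumes, key=volumes.get)  # end-systole: smallest LV
    edv, esv = volumes[ed_phase], volumes[es_phase]
    lvef = 100.0 * (edv - esv) / edv
    return ed_phase, es_phase, lvef

# hypothetical volumes over five reconstructed phases
phases = {0: 140.0, 20: 110.0, 40: 60.0, 60: 90.0, 80: 135.0}
ed, es, lvef = ejection_fraction(phases)
```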
Treatment of patients with obstructive coronary artery disease is guided by the functional significance of a coronary artery stenosis. Fractional flow reserve (FFR), measured during invasive coronary angiography (ICA), is considered the reference standard to define the functional significance of a coronary stenosis. Here, we present an automatic method for non-invasive detection of patients with functionally significant coronary artery stenosis based on 126 retrospectively collected cardiac CT angiography (CCTA) scans with corresponding FFR measurements. We combine our previous works for the analysis of the complete coronary artery tree and the LV myocardium by applying convolutional autoencoders (CAEs) to characterize both the coronary arteries and the LV myocardium. To handle the varying number of coronary arteries in a patient, an attention-based neural network is trained to obtain a combined representation per patient, and to classify each patient according to the presence of functionally significant stenosis. Cross-validation experiments resulted in an average area under the receiver operating characteristic curve of 0.74, and showed that the proposed combined analysis outperformed the analysis of the coronary arteries or the LV myocardium alone. This may lead to a reduction in the number of unnecessary ICA procedures in patients with suspected obstructive CAD.
Accurate MR-to-CT synthesis is a requirement for MR-only workflows in radiotherapy (RT) treatment planning. In recent years, deep learning-based approaches have shown impressive results in this field. However, to prevent downstream errors in RT treatment planning, it is important that deep learning models are only applied to data for which they are trained and that generated synthetic CT (sCT) images do not contain severe errors. For this, a mechanism for online quality control should be in place. In this work, we use an ensemble of sCT generators and assess their disagreement as a measure of uncertainty of the results. We show that this uncertainty measure can be used for two kinds of online quality control. First, to detect input images that are outside the expected distribution of MR images. Second, to identify sCT images that were generated from suitable MR images but potentially contain errors. Such automatic online quality control for sCT generation is likely to become an integral part of MR-only RT workflows.
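A minimal sketch of the disagreement-based uncertainty measure, assuming the ensemble members' sCT volumes are given as NumPy arrays; the voxel-wise standard deviation and the mean-based scan-level score are simplifications for illustration, not necessarily the paper's exact measure:

```python
import numpy as np

def sct_uncertainty(ensemble_scts):
    """Quantify ensemble disagreement: the voxel-wise standard
    deviation across ensemble members serves as a spatial uncertainty
    map, and its mean as a scan-level uncertainty score."""
    stack = np.stack(ensemble_scts, axis=0)  # (n_members, *volume_shape)
    voxel_std = stack.std(axis=0)
    return voxel_std, float(voxel_std.mean())

# toy 2-member ensemble that disagrees everywhere by 2 HU
members = [np.zeros((4, 4)), 2.0 * np.ones((4, 4))]
voxel_std, score = sct_uncertainty(members)
```

For quality control, the scan-level score would be compared against a threshold calibrated on in-distribution validation scans (high score: flag for manual review), while the spatial map localizes potentially erroneous regions.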
Convolutional neural networks (CNNs) have been widely and successfully used for medical image segmentation. However, CNNs are typically considered to require large numbers of dedicated expert-segmented training volumes, which may be limiting in practice. This work investigates whether clinically obtained segmentations which are readily available in picture archiving and communication systems (PACS) could provide a possible source of data to train a CNN for segmentation of organs-at-risk (OARs) in radiotherapy treatment planning. In such data, delineations of structures deemed irrelevant to the target clinical use may be lacking. To overcome this issue, we use multi-label instead of multi-class segmentation. We empirically assess how many clinical delineations would be sufficient to train a CNN for the segmentation of OARs and find that increasing the training set size beyond a limited number of images leads to sharply diminishing returns. Moreover, we find that by using multi-label segmentation, missing structures in the reference standard do not have a negative effect on overall segmentation accuracy. These results indicate that segmentations obtained in a clinical workflow can be used to train an accurate OAR segmentation model.
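The difference between multi-class and multi-label training can be sketched with a masked per-structure binary cross-entropy; the masking scheme below is an illustrative assumption, not necessarily the paper's exact loss:

```python
import numpy as np

def multilabel_bce(pred_probs, targets, available):
    """Per-structure binary cross-entropy for multi-label segmentation.

    Each organ-at-risk is predicted in its own sigmoid channel, so
    structures missing from a clinical delineation can simply be
    masked out of the loss instead of being treated as background,
    which is how incomplete PACS annotations avoid penalizing the
    network. Shapes: pred_probs and targets are (n_structures, ...),
    `available` is a boolean vector of length n_structures.
    """
    eps = 1e-7
    p = np.clip(pred_probs, eps, 1.0 - eps)
    per_channel = -(targets * np.log(p) + (1.0 - targets) * np.log(1.0 - p))
    per_channel = per_channel.reshape(per_channel.shape[0], -1).mean(axis=1)
    return float(per_channel[available].mean())  # only delineated structures

# toy case: structure 1 is missing from the reference, so only
# structure 0 contributes to the loss
pred = np.array([[0.9], [0.5]])
targets = np.array([[1.0], [1.0]])
available = np.array([True, False])
loss = multilabel_bce(pred, targets, available)
```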
Accurate segmentation of the left ventricle myocardium in cardiac CT angiography (CCTA) is essential, e.g., for the assessment of myocardial perfusion. Automatic deep learning methods for segmentation in CCTA might suffer from differences in contrast-agent attenuation between training and test data due to non-standardized contrast administration protocols and varying cardiac output. We propose augmentation of the training data with virtual mono-energetic reconstructions from a spectral CT scanner, which show different attenuation levels of the contrast agent. We compare this to an augmentation by linear scaling of all intensity values, and combine both types of augmentation. We train a 3D fully convolutional network (FCN) with 10 conventional CCTA images and corresponding virtual mono-energetic reconstructions acquired on a spectral CT scanner, and evaluate on 40 CCTA scans acquired on a conventional CT scanner. We show that training with data augmentation using virtual mono-energetic images improves upon training with only conventional images (Dice similarity coefficient (DSC) 0.895 ± 0.039 vs. 0.846 ± 0.125). In comparison, training with data augmentation using linear scaling improves the DSC to 0.890 ± 0.039. Moreover, combining the results of both augmentation methods leads to a DSC of 0.901 ± 0.036, showing that both augmentations lead to different local improvements of the segmentations. Our results indicate that virtual mono-energetic images improve the generalization of an FCN used for myocardium segmentation in CCTA images.
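The linear-scaling augmentation baseline can be sketched in a few lines; the scale-factor range below is an assumption, not a value given in the abstract:

```python
import numpy as np

def linear_scaling_augmentation(image, rng, lo=0.7, hi=1.3):
    """Scale all intensity values by one random factor per training
    sample to mimic varying contrast-agent attenuation (the [lo, hi]
    range is an illustrative assumption)."""
    factor = rng.uniform(lo, hi)
    return image * factor

rng = np.random.default_rng(42)
image = np.ones((8, 8))          # stand-in for a CCTA patch
augmented = linear_scaling_augmentation(image, rng)
```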
Current state-of-the-art deep learning segmentation methods have not yet made a broad entrance into the clinical setting, in spite of high demand for such automatic methods. One important reason is a lack of reliability caused by models that fail unnoticed and often locally produce anatomically implausible results, errors that medical experts would not make. This paper presents an automatic image segmentation method based on (Bayesian) dilated convolutional networks (DCNN) that generate segmentation masks and spatial uncertainty maps for the input image at hand. The method was trained and evaluated using segmentation of the left ventricle (LV) cavity, right ventricle (RV) endocardium and myocardium (Myo) at end-diastole (ED) and end-systole (ES) in 100 cardiac 2D MR scans from the MICCAI 2017 Challenge (ACDC). Combining segmentations and uncertainty maps and employing a human-in-the-loop setting, we provide evidence that image areas indicated as highly uncertain with respect to the obtained segmentation almost entirely cover regions of incorrect segmentation. The fused information can be harnessed to increase segmentation performance. Our results reveal that we can obtain valuable spatial uncertainty maps with low computational effort using DCNNs.
Morphological analysis and identification of pathologies in the aorta are important for cardiovascular diagnosis and risk assessment in patients. Manual annotation is time-consuming and cumbersome in CT scans acquired without contrast enhancement and with low radiation dose. Hence, we propose an automatic method to segment the ascending aorta, the aortic arch and the thoracic descending aorta in low-dose chest CT without contrast enhancement. Segmentation was performed using a dilated convolutional neural network (CNN), with a receptive field of 131 × 131 voxels, that classified voxels in axial, coronal and sagittal image slices. To obtain a final segmentation, the obtained probabilities of the three planes were averaged per class, and voxels were subsequently assigned to the class with the highest class probability. Two-fold cross-validation experiments were performed where ten scans were used to train the network and another ten to evaluate the performance. Dice coefficients of 0.83 ± 0.07, 0.86 ± 0.06 and 0.88 ± 0.05, and Average Symmetrical Surface Distances (ASSDs) of 2.44 ± 1.28, 1.56 ± 0.68 and 1.87 ± 1.30 mm were obtained for the ascending aorta, the aortic arch and the descending aorta, respectively. The results indicate that the proposed method could be used in large-scale studies analyzing the anatomical location of pathology and morphology of the thoracic aorta.
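The three-plane fusion step can be sketched directly; the toy probabilities below (two classes, one voxel) are for illustration only:

```python
import numpy as np

def fuse_orthogonal_predictions(p_axial, p_coronal, p_sagittal):
    """Average the per-class probabilities predicted by the networks
    in the three orthogonal planes, then assign each voxel to the
    class with the highest mean probability.
    Inputs have shape (n_classes, z, y, x)."""
    p_mean = (p_axial + p_coronal + p_sagittal) / 3.0
    return np.argmax(p_mean, axis=0)

# toy volume of a single voxel, classes: 0 = background, 1 = aorta
p_ax = np.array([[[[0.2]]], [[[0.8]]]])
p_co = np.array([[[[0.6]]], [[[0.4]]]])
p_sa = np.array([[[[0.1]]], [[[0.9]]]])
labels = fuse_orthogonal_predictions(p_ax, p_co, p_sa)
```

Averaging before the argmax lets a confident prediction in two planes outvote a mistake in the third, which is the purpose of analyzing all three orientations.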
CT attenuation correction (CTAC) images acquired with PET/CT visualize coronary artery calcium (CAC) and enable CAC quantification. CAC scores acquired with CTAC have been suggested as a marker of cardiovascular disease (CVD). In this work, an algorithm previously developed for automatic CAC scoring in dedicated cardiac CT was applied to automatic CAC detection in CTAC. The study included 134 consecutive patients undergoing 82-Rb PET/CT. Low-dose rest CTAC scans were acquired (100 kV, 11 mAs, 1.4 mm × 1.4 mm × 3 mm voxel size). An experienced observer defined the reference standard with the clinically used intensity threshold for calcium identification (130 HU). Five scans were removed from analysis due to artifacts. The algorithm extracted potential CAC by intensity-based thresholding and 3D connected component labeling. Each candidate was described by location, size, shape and intensity features. An ensemble of extremely randomized decision trees was used to identify CAC. The data set was randomly divided into training and test sets. Automatically identified CAC was quantified using volume and Agatston scores. In 33 test scans, the system detected on average 469 mm³ of 730 mm³ (64%) of CAC, with 36 mm³ false positive volume per scan. The intraclass correlation coefficient for volume scores was 0.84. Each patient was assigned to one of four CVD risk categories based on the Agatston score (0–10, 11–100, 101–400, >400). The correct CVD category was assigned to 85% of patients (Cohen's linearly weighted κ = 0.82). Automatic detection of CVD risk based on CAC scoring in rest CTAC images is feasible. This may enable large-scale studies evaluating the clinical value of CAC scoring in CTAC data.
Localization of anatomical regions of interest (ROIs) is a preprocessing step in many medical image analysis tasks. While trivial for humans, it is complex for automatic methods. Classic machine learning approaches face the challenge of hand-crafting features to describe the differences between ROIs and background. Deep convolutional neural networks (CNNs) alleviate this by automatically finding hierarchical feature representations from raw images. We employ this trait to detect anatomical ROIs in 2D image slices in order to localize them in 3D.
In 100 low-dose non-contrast enhanced non-ECG synchronized screening chest CT scans, a reference standard was defined by manually delineating rectangular bounding boxes around three anatomical ROIs — heart, aortic arch, and descending aorta. Every anatomical ROI was automatically identified using a combination of three CNNs, each analyzing one orthogonal image plane. While single CNNs predicted presence or absence of a specific ROI in the given plane, the combination of their results provided a 3D bounding box around it.
Classification performance of each CNN, expressed in area under the receiver operating characteristic curve, was ≥0.988. Additionally, the performance of ROI localization was evaluated. Median Dice scores for automatically determined bounding boxes around the heart, aortic arch, and descending aorta were 0.89, 0.70, and 0.85 respectively. The results demonstrate that accurate automatic 3D localization of anatomical structures by CNN-based 2D image classification is feasible.
Calcium burden determined in CT images acquired in lung cancer screening is a strong predictor of cardiovascular events (CVEs). This study investigated whether subjects undergoing such screening who are at risk of a CVE can be identified using automatic image analysis and subject characteristics. Moreover, the study examined whether these individuals can be identified using solely image information, or whether a combination of image and subject data is needed. A set of 3559 male subjects participating in the Dutch-Belgian lung cancer screening trial was included. Low-dose non-ECG synchronized chest CT images acquired at baseline were analyzed (1834 scanned in the University Medical Center Groningen, 1725 in the University Medical Center Utrecht). Aortic and coronary calcifications were identified using previously developed automatic algorithms. A set of features describing number, volume and size distribution of the detected calcifications was computed. Age of the participants was extracted from image headers. Features describing participants' smoking status, smoking history and past CVEs were obtained. CVEs that occurred within three years after the imaging were used as outcome. Support vector machine classification was performed employing different feature sets: only image features, or a combination of image and subject-related characteristics. Classification based solely on the image features resulted in an area under the ROC curve (Az) of 0.69. A combination of image and subject features resulted in an Az of 0.71. The results demonstrate that subjects undergoing lung cancer screening who are at risk of CVE can be identified using automatic image analysis. Adding subject information slightly improved the performance.
Presence of coronary artery calcium (CAC) is a strong and independent predictor of cardiovascular events. We present a system using a forest of extremely randomized trees to automatically identify and quantify CAC in routinely acquired cardiac non-contrast enhanced CT. Candidate lesions the system could not label with high certainty were automatically identified and presented to an expert who could relabel them to achieve high scoring accuracy with minimal effort. The study included 200 consecutive non-contrast enhanced ECG-triggered cardiac CTs (120 kV, 55 mAs, 3 mm section thickness). Expert CAC annotations made as part of the clinical routine served as the reference standard. CAC candidates were extracted by thresholding (130 HU) and 3-D connected component analysis. They were described by shape, intensity and spatial features calculated using multi-atlas segmentation of coronary artery centerlines from ten CTA scans. CAC was identified using a randomized decision tree ensemble classifier in a ten-fold stratified cross-validation experiment and quantified in Agatston and volume scores for each patient. After classification, candidates with posterior probability indicating uncertain labeling were selected for further assessment by an expert. Images with metal implants were excluded. In the remaining 164 images, Spearman's ρ between automatic and reference scores was 0.94 for both Agatston and volume scores. On average 1.8 candidate lesions per scan were subsequently presented to an expert. After correction, Spearman's ρ was 0.98. We have described a system for automatic CAC scoring in cardiac CT images which is able to effectively select difficult examinations for further refinement by an expert.
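The candidate-extraction step (thresholding at 130 HU followed by 3-D connected component analysis) can be sketched as follows; the pure-NumPy 6-connectivity BFS and the minimum-size filter are a minimal stand-in for the actual implementation:

```python
from collections import deque

import numpy as np

def extract_cac_candidates(ct_hu, threshold=130, min_voxels=2):
    """Threshold a CT volume (in HU) and group the voxels above the
    threshold into 6-connected 3-D components. Each component is a
    candidate lesion that would subsequently be described by shape,
    intensity and spatial features and classified by the tree ensemble."""
    mask = ct_hu >= threshold
    visited = np.zeros_like(mask, dtype=bool)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    candidates = []
    for seed in zip(*np.nonzero(mask)):
        if visited[seed]:
            continue
        queue, voxels = deque([seed]), []
        visited[seed] = True
        while queue:  # breadth-first traversal of one component
            v = queue.popleft()
            voxels.append(v)
            for d in offsets:
                w = tuple(v[i] + d[i] for i in range(3))
                if all(0 <= w[i] < mask.shape[i] for i in range(3)) \
                        and mask[w] and not visited[w]:
                    visited[w] = True
                    queue.append(w)
        if len(voxels) >= min_voxels:  # discard single-voxel noise
            candidates.append({
                "n_voxels": len(voxels),
                "max_hu": float(max(ct_hu[v] for v in voxels)),
            })
    return candidates

# toy volume: one two-voxel lesion and one isolated bright voxel
ct = np.zeros((4, 4, 4))
ct[1, 1, 1], ct[1, 1, 2] = 200.0, 180.0
ct[3, 3, 3] = 150.0
candidates = extract_cac_candidates(ct)
```

In practice a library routine such as `scipy.ndimage.label` would replace the hand-rolled BFS; it is written out here only to keep the sketch self-contained.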