Linear-regression convolutional neural network for fully automated coronary lumen segmentation in intravascular optical coherence tomography

Yan Ling Yong, Li Kuo Tan, Robert A. McLaughlin, Kok Han Chee, Yih Miin Liew
Abstract
Intravascular optical coherence tomography (OCT) is an optical imaging modality commonly used in the assessment of coronary artery diseases during percutaneous coronary intervention. Manual segmentation to assess luminal stenosis from OCT pullback scans is challenging and time consuming. We propose a linear-regression convolutional neural network to automatically perform vessel lumen segmentation, parameterized in terms of radial distances from the catheter centroid in polar space. Benchmarked against gold-standard manual segmentation, our proposed algorithm achieves an average locational accuracy of the vessel wall of 22 microns, and a Dice coefficient and Jaccard similarity index of 0.985 and 0.970, respectively. The average absolute error of luminal area estimation is 1.38%. The processing time is 40.6 ms per image, suggesting the potential to be incorporated into a clinical workflow and to provide quantitative assessment of the vessel lumen in an intraoperative time frame.

1.

Introduction

Cardiovascular disease is the leading cause of death globally.1 Atherosclerosis of the coronary arteries results in remodeling and narrowing of the vessels that supply oxygenated blood to the heart, and may thus lead to myocardial infarction. Common interventional approaches include percutaneous coronary intervention and coronary artery bypass graft surgery.2 The choice of treatment varies with a range of clinical factors, including the morphology of the vessel wall and the degree of stenosis as quantified by cross-sectional luminal area.

Imaging of the vasculature, specifically the coronary arteries, plays a critical role in the assessment of these treatment options. X-ray computed coronary angiography and cardiac magnetic resonance imaging allow noninvasive imaging but are very limited in their ability to assess the structure of the artery walls.3 Invasive techniques, such as intravascular ultrasound (IVUS),4 provide cross-sectional imaging of the artery walls, but with limited spatial resolution.5 Intravascular optical coherence tomography (IVOCT) lacks the image penetration depth of IVUS but provides far higher resolution imaging, allowing visualization and quantification of critical structures such as the fibrous cap of atherosclerotic plaques and delineation of the arterial wall layers.6–8 In addition, IVOCT is finding application in imaging coronary stents to assess vascular healing and potential restenosis.9,10

Delineation of the vessel lumen in IVOCT images enables quantification of the luminal cross-sectional area. Such delineation has also been used as the first step toward plaque segmentation11,12 and the assessment of stent apposition.13 However, manual delineation is impractical due to the high number of cross-sectional scans acquired in a single IVOCT pullback, typically >100 images. Automatic delineation of the lumen wall is challenging for several reasons, including nonhomogeneous intensity, blood residue, the presence or absence of different types of stents, irregular lumen shapes, image artifacts, and bifurcations.14

Previous delineation approaches have employed edge detection filters15 and spline fitting to segment the lumen boundary and stent struts.16 Other approaches have included wavelet transforms and mathematical morphology,17 Otsu's automatic thresholding and the intersection of radial lines with lumen boundaries,11,12 Markov random field models,18 and light backscattering methods.19

Deep learning is a class of machine learning algorithms utilizing artificial neural networks (ANNs), which in recent years has proven useful for medical image processing. Input features are processed through a multilayered network, defined by a network of weights and biases, to produce a nonlinear output. During training, these weight and bias values are optimized by minimizing a loss function that maps training input to known target output values. Convolutional neural networks (CNNs) are a subset of ANNs that operate on input with regular structure: they apply convolutional filters to the input of each layer and have proven highly effective in image classification tasks.20–22

Most neural network applications in image processing are image-based classification models, where the network is trained to classify each pixel in the input image into one of several classes. The use of this technique has been extended into a variety of medical image segmentation applications. For example, CNNs have been used to classify lung image patches in interstitial lung disease23 as well as head and neck cancer in hyperspectral imaging.24 CNNs have also been applied in retinal layer and microvasculature segmentation of retinal OCT images,25,26 and arterial layers segmentation in patients with Kawasaki disease.27 These CNN methods employ the commonly used feature classification approach.

An alternative approach is to train the network to perform linear regression, in contrast to feature classification. Recently, a linear-regression CNN model was demonstrated to outperform a conventional CNN in cardiac left ventricle segmentation.28 CNN regression was used to infer the radial distances between the left ventricle centerpoint and the endo- and epicardial contours in polar space. This suggests that regression-based CNNs may offer an alternative route to image segmentation in comparable medical applications.

In this paper, we propose a method of coronary lumen segmentation for clinical assessment and treatment planning of coronary artery stenosis using a linear-regression CNN. We test the algorithm on in vivo clinical images and assess it against gold-standard manual segmentations. This is the first use of a linear-regression CNN approach to the automated delineation of the vessel lumen in IVOCT images. This paper is structured as follows: Sec. 2 provides experimental details and an explanation of the CNN architecture and implementation; Sec. 3 provides accuracy results benchmarked against interobserver variability of manual segmentation, and an assessment of the impact of varying the amount of training data; and Secs. 4 and 5 conclude with a discussion of the potential clinical impact and limitations of such an approach.

2.

Materials and Method

2.1.

IVOCT Data Acquisition and Preparation for Training and Testing

The data used for this study comprise IVOCT images of patients diagnosed with coronary artery disease. The images were acquired at the University of Malaya Medical Center (UMMC) catheterization laboratory using two standard clinical systems: the Ilumien and Ilumien Optis IVOCT systems (St. Jude Medical). Both systems have an axial resolution of 15 μm and a scan diameter of 10 mm, with maximum frame rates of 100 and 180 fps, respectively. The study was approved by the University of Malaya Medical Ethics Committee (Ref: 20158-1554), and all patient data were anonymized.

In total, 64 pullbacks were acquired from 28 patients [25%/75% male/female, mean age 59.71 (±9.61) years] using a Dragonfly Duo imaging catheter (2.7 F crossing profile) while the artery was under contrast flushing (Iopamiro® 370). The internal rotating fiber-optic imaging core performed motorized rotational pullback scans over a length of 54 or 75 mm in 5 s. These scans include multiple pre- and poststented images of the coronary artery at different locations. The pullbacks were randomly assigned to two groups at a ratio of roughly 7:3, i.e., 45 pullbacks were designated as training sets and the remaining 19 as test sets. Excluding images depicting only the guide catheter, each pullback contains between 155 and 375 polar images. These images contain a heterogeneous mix of features: the absence or presence of stent struts (metallic stents, bioresorbable stents, or both), fibrous plaques, calcified plaques, lipid-rich plaques, ruptured plaques, thrombus, dissections, motion artifacts, bifurcations, and blood artifacts. The original size of each pullback frame was 984×496 pixels (axial × angular dimension), subsampled in both dimensions to 488×248 pixels to reduce training and processing time. For each image, raw intensity values were converted from linear to logarithmic scale before normalizing by mean and standard deviation.
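This preprocessing is straightforward to reproduce. The following is a minimal sketch, assuming the raw frame is available as a NumPy array; the interpolation order and the epsilon guarding the logarithm are illustrative choices not specified in the text:

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess_polar_frame(frame):
    """Subsample, log-scale, and normalize one raw polar IVOCT frame.

    frame: raw linear-scale intensities, 984 x 496 pixels (axial x angular).
    """
    # Subsample both dimensions to 488 x 248 to reduce training/processing time.
    small = zoom(frame.astype(np.float64),
                 (488 / frame.shape[0], 248 / frame.shape[1]), order=1)
    # Convert raw intensity values from linear to logarithmic scale.
    log_img = np.log10(small + 1e-9)  # small epsilon guards against log(0)
    # Normalize by mean and standard deviation.
    return (log_img - log_img.mean()) / log_img.std()
```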

Gold-standard segmentations were generated on both training and test sets by manual frame-by-frame delineation using ImageJ29 in Cartesian coordinates, according to the consensus document,14 whereby a contour was drawn between the lumen and the leading edge of the intima. The contour was also drawn manually across the guidewire shadow and bifurcations at locations that best represent the underlying border of the main lumen, gauged from the adjacent slices. The manual contour of the lumen border for each image was subsequently converted to polar coordinates, then smoothed and interpolated to 100 points using cubic B-spline interpolation for CNN training and testing.
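The smoothing and resampling of each manual contour can be sketched as follows. The paper specifies a cubic B-spline; for this illustration we approximate it with a standard periodic cubic spline, since the contour is closed in polar space:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def resample_contour(theta, r, n_points=100):
    """Smooth and resample a manual lumen contour in polar coordinates.

    theta: angles (radians) of the drawn points, strictly increasing in [0, 2*pi)
    r:     radial distance from the catheter center at each angle
    Returns radii at n_points equidistant angles.
    """
    # Close the contour so the spline is periodic over one full revolution.
    theta_c = np.append(theta, theta[0] + 2 * np.pi)
    r_c = np.append(r, r[0])
    spline = CubicSpline(theta_c, r_c, bc_type='periodic',
                         extrapolate='periodic')
    return spline(np.linspace(0, 2 * np.pi, n_points, endpoint=False))
```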

2.2.

Convolutional Neural Networks Regression Architecture and Implementation Details

Using our linear-regression CNN model, in each polar image we infer the radius parameter of the vessel wall at 100 equidistant radial locations, rather than the more conventional approach of classifying each pixel within the image. This has the advantage of avoiding the physiologically unrealistic results that may arise from segmentation of individual pixels. The lumen segmentation was parameterized in terms of radial distances from the center of the catheter in polar space.
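Because the network outputs radii rather than pixel labels, any vector of positive radii maps back to a simple closed curve. A minimal sketch of this mapping (function and variable names are illustrative):

```python
import numpy as np

def radii_to_contour(radii, center):
    """Convert predicted radial distances to a closed Cartesian contour.

    radii:  predicted distances (pixels) at equidistant angles around the catheter
    center: (x, y) catheter centroid in the Cartesian image
    """
    theta = np.linspace(0, 2 * np.pi, len(radii), endpoint=False)
    x = center[0] + radii * np.cos(theta)
    y = center[1] + radii * np.sin(theta)
    return np.stack([x, y], axis=1)  # shape (n_points, 2)
```

By construction, the contour is star-shaped about the catheter centroid, which is what rules out the fragmented, pixel-level errors mentioned above.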

The general flow of the proposed CNN model is shown in Fig. 1. Our network consists of a simple structure with four convolutional layers and three fully connected layers, including the final output layer. All polar images were padded circularly left and right before being windowed for input. The window dimension was 488×128  pixels centered on each individual radial point, therefore yielding 100 inputs and 100 evaluated radial distances per image.
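The circular padding and windowing can be sketched as follows; the mapping of the 100 radial points to angular columns is our assumption, as the text does not specify it:

```python
import numpy as np

def extract_windows(polar_img, win_w=128, n_points=100):
    """Cut one full-depth window per radial point from a polar image.

    polar_img: preprocessed polar frame, 488 x 248 (axial x angular).
    Returns an array of shape (n_points, 488, win_w).
    """
    n_ang = polar_img.shape[1]
    # Pad circularly left and right so windows wrap across the 0/360 deg seam.
    padded = np.pad(polar_img, ((0, 0), (win_w // 2, win_w // 2)), mode='wrap')
    # Angular column of each equidistant radial location.
    cols = np.round(np.linspace(0, n_ang, n_points, endpoint=False)).astype(int)
    # Each window is centered on its radial point.
    return np.stack([padded[:, c:c + win_w] for c in cols])
```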

Fig. 1

Overview of the linear-regression CNN segmentation system (refer to text for details).


The details of the network architecture are presented in Table 1. A filter kernel of size 5×5×24 with boundary zero-padding was applied at every convolutional layer, yielding 24 feature maps per layer. In the first layer, a stride of 2 was also applied along the angular dimension to reduce computational load. The outputs of the first three convolutional layers were also max pooled over 2×2 windows. Each fully connected layer contains 512 nodes. Exponential linear units30 were used as the activation functions for all convolutional and fully connected layers except the final layer. Dropout with a keep probability of 0.75 was applied to the fully connected layers FC1 and FC2 to improve the robustness of the network.31 The final layer outputs a single value representing the radial distance between the lumen border and the center of the catheter at the radial position being evaluated.

Table 1

Linear-regression CNN architecture for lumen segmentation, applied to each windowed image. The output is the radial distance of the lumen border from the center of the catheter. CN, convolutional layer; FC, fully connected layer.

Layer   In           Weights      Pooling   Out
CN1a    488×128×1    1×5×5×24     2×2       244×32×24
CN2     244×32×24    24×5×5×24    2×2       122×16×24
CN3     122×16×24    24×5×5×24    2×2       61×8×24
CN4     61×8×24      24×5×5×24    —         61×8×24
FC1     11712        11712×512    —         512
FC2     512          512×512      —         512
Out     512          512×1        —         1

aA stride of size 2 was applied on the angular dimension to reduce computational load.
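The architecture in Table 1 translates directly into a few lines of model code. The sketch below uses the present-day Keras API for readability (the original implementation was written against TensorFlow v1.0.1); the shapes in the comments correspond to the table:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_model():
    """Linear-regression CNN of Table 1 (one radial distance per window)."""
    return tf.keras.Sequential([
        layers.Input(shape=(488, 128, 1)),
        # CN1: stride 2 along the angular dimension reduces computational load.
        layers.Conv2D(24, 5, strides=(1, 2), padding='same', activation='elu'),
        layers.MaxPool2D(2),                                     # -> 244 x 32 x 24
        layers.Conv2D(24, 5, padding='same', activation='elu'),  # CN2
        layers.MaxPool2D(2),                                     # -> 122 x 16 x 24
        layers.Conv2D(24, 5, padding='same', activation='elu'),  # CN3
        layers.MaxPool2D(2),                                     # -> 61 x 8 x 24
        layers.Conv2D(24, 5, padding='same', activation='elu'),  # CN4, no pooling
        layers.Flatten(),                                        # -> 11712
        layers.Dense(512, activation='elu'),                     # FC1
        layers.Dropout(0.25),  # keep probability 0.75
        layers.Dense(512, activation='elu'),                     # FC2
        layers.Dropout(0.25),
        layers.Dense(1),       # linear output: the radial distance
    ])
```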

The objective function used for network training is the standard mean-squared error. Starting from a random initialization, the weight and bias parameters are iteratively adjusted to minimize the mean-squared error between the gold-standard radial distances and the CNN output. The Adam stochastic gradient descent algorithm32 was used to perform this minimization. The network was trained stochastically with a mini-batch size of 100 at a base learning rate of 0.005. The learning rate was halved every 50,000 runs. Training was stopped at 400,000 runs, where convergence was observed (i.e., the observed losses had ceased to improve for at least 100,000 runs). The trained weights and biases of the network, amounting to 6.3 million parameters, were subsequently used to predict the lumen contour on the test sets.
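The training configuration described above can be sketched as follows, reusing build_model from the previous sketch (the data pipeline feeding windowed images and gold-standard radii is omitted):

```python
import tensorflow as tf

# Halve the learning rate every 50,000 runs, starting from 0.005.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.005, decay_steps=50_000,
    decay_rate=0.5, staircase=True)

model = build_model()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=schedule),
              loss='mse')  # mean-squared error on the radial distances
# Training ran for 400,000 mini-batches of 100 windows each, e.g.:
# model.fit(windows, radii, batch_size=100, epochs=...)
```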

The neural network was implemented in a Python (Python Software Foundation, Delaware) environment using the TensorFlow v1.0.1 machine learning framework (Google Inc., California). The network was executed on a Linux-based workstation with an Intel i5-6500 CPU and an NVIDIA GeForce GTX 1080 8 GB GPU. The training time for 45 training sets was 13.8 h, and the complete inference time for each test image was 40.6 ms.

2.3.

Validation

The accuracy of our proposed linear-regression CNN lumen segmentation was validated against the gold-standard segmentations of the test data, i.e., the aforementioned 19 manually delineated pullbacks, containing 5685 images in total. The accuracy was assessed in three ways: (1) on a point-by-point basis via a distance error measure, (2) in the form of binary image overlaps, and (3) based on luminal area.

The first assessment involves a point-by-point analysis of the 100 equidistant radial contour points in each image, whereby the mean absolute Euclidean distance error between the gold-standard and predicted contours was computed per image.
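A minimal sketch of this distance measure, assuming predicted and gold-standard contours are given as matched (100, 2) arrays of Cartesian points:

```python
import numpy as np

def mean_absolute_distance(pred, gold):
    """Mean absolute Euclidean distance between corresponding contour points."""
    return np.linalg.norm(pred - gold, axis=1).mean()
```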

The second assessment evaluates the regions delineated as lumen. The overlap between the binary masks generated from the predicted contours and the corresponding gold standards was computed using the Dice coefficient and the Jaccard similarity index.
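Both overlap indices follow directly from the two binary masks; a sketch:

```python
import numpy as np

def dice_jaccard(mask_a, mask_b):
    """Dice coefficient and Jaccard index between two boolean lumen masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    dice = 2.0 * inter / (mask_a.sum() + mask_b.sum())
    return dice, inter / union
```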

The third assessment targets the luminal area, one of the clinical indices used to locate and grade the extent of coronary stenosis for treatment planning. Luminal area was computed from the binary mask produced from the predicted contours and compared against the corresponding gold standard. We also performed a one-tailed Wilcoxon signed-rank test on the errors of the estimated luminal areas at a significance level of 0.001. Three-dimensional (3-D) surface models of the lumen wall were also generated for all pullbacks to facilitate visual comparison of manual contouring against automated contouring by the proposed CNN regression model.
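The area measure and the significance test can be sketched as follows; the pixel-area constant and variable names are illustrative:

```python
import numpy as np
from scipy.stats import wilcoxon

def luminal_area(mask, pixel_area_mm2):
    """Luminal area (mm^2) from a boolean mask of the segmented lumen."""
    return mask.sum() * pixel_area_mm2

# One-tailed test that the absolute percentage errors lie below 2%:
# abs_pct_err is a per-image array of absolute errors in percent.
# stat, p = wilcoxon(abs_pct_err - 2.0, alternative='less')
```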

2.4.

Dependency of Network Performance on Training Data Quantity

To understand the dependency of network performance on the amount of training data, we assessed the variation in accuracy on the 19 test pullbacks for different numbers of training datasets. Tests were performed with 10, 15, 20, 25, 30, 35, 40, and 45 pullbacks, with the training pullbacks for each group selected randomly. The number of training runs was kept constant at 400,000 across the different training sets, with the same base learning rate and learning-rate decay protocol.

2.5.

Interobserver Variability Against Convolutional Neural Networks Accuracy

To quantify the acceptable variation in segmentation, we performed an experiment to assess the variation in the manual gold standard generated by three independent observers.

One hundred images were selected randomly from five pullbacks of the test sets, and the lumen was manually delineated by three independent observers. The interobserver variability was assessed through Bland–Altman analysis, consistent with Celi and Berti in their study on the segmentation of coronary lesions.11 Specifically, the signed differences among all possible corresponding pairs of luminal areas from the three observers were plotted against their mean areas. Bland–Altman analyses were also performed on luminal areas evaluated by the CNN against the corresponding evaluations by all observers. These analyses quantify the bias and limits of agreement (i.e., 95% confidence interval, or 1.96× the standard deviation of the signed differences from the mean) among the observers themselves as well as between the CNN and the observers.
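The quantities reported below, the bias and the limits of agreement, reduce to a few lines; a sketch assuming paired area measurements:

```python
import numpy as np

def bland_altman(areas_a, areas_b):
    """Bias and 95% limits of agreement between paired luminal areas."""
    diff = np.asarray(areas_a) - np.asarray(areas_b)  # signed differences
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)  # 1.96 x SD of the signed differences
    return bias, loa
```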

3.

Results

3.1.

Dependency of Network Performance on Training Data Quantity

The results assessing the impact of training data quantity on CNN accuracy are shown in Fig. 2. The value reported is the mean positional accuracy of each point along the vessel wall. There was notable improvement in CNN accuracy with increasing training data quantity up to 25 training datasets; beyond that, the mean absolute error per image varied little with additional data. However, the optimal CNN segmentation was obtained from training with the largest sample size, i.e., 45 pullbacks comprising 13,342 training images, as summarized in Table 2. At 45 training pullbacks, the median of the mean absolute error per image, as quantified by point-by-point analysis, was 21.87 microns, whereas the Dice coefficient and Jaccard similarity index were 0.985 and 0.970, respectively.

Fig. 2

Mean absolute error against different numbers of training datasets.


Table 2

Accuracy of CNN segmentation with 45 training pullbacks (n=13,342). The values are obtained based on the segmentation on 19 test pullbacks (n=5685).

Measure                                                       Median (interquartile range)
Mean absolute error per image (point-by-point analysis), μm   21.87 (16.28, 31.29)
Dice coefficient                                              0.985 (0.979, 0.988)
Jaccard similarity index                                      0.970 (0.958, 0.977)

Representative segmentation results are shown in Fig. 3. Apart from performing well on images with clear lumen border contrast [Fig. 3(a)], the linear-regression CNN segmentation has shown robustness in segmenting images with inhomogeneous lumen intensity (b), severe stenosis (c), blood residue due to suboptimal flushing (d)–(f), multiple reflections (g), embedded stent struts (h) and (i), malapposed metallic stent struts (j), malapposed bioresorbable stent struts (k), and minor side branches [(c), (i), and (l)]. Acceptable lumen segmentation was found at the shadow behind the guidewire and metallic stent struts across all images. Errors occurred most frequently at major bifurcations (spanning an angle >90 deg), where the appropriate boundary for segmenting the main vessel was ambiguous [Figs. 4(c) and 4(d)]. Seventy-two percent of the 100 worst performing segmentations were found to contain major bifurcations and, at these locations, overestimation of the area of the main vessel was noted.

Fig. 3

Representative results from the test sets, showing (a) good segmentation from the linear-regression CNN on images with good lumen border contrast, (b) inhomogeneous lumen intensity, (c) severe stenosis, (d)–(f) blood swirl due to inadequate flushing, (g) multiple reflections (yellow arrow), (h) and (i) embedded metallic and bioresorbable stent struts due to restenosis, respectively, (j) malapposed metallic stent struts, (k) a malapposed bioresorbable stent strut, and (l) a minor side branch. Blue and red contours represent CNN segmentation and the gold standard, respectively. Scale bar in (a) represents 500 microns.


Fig. 4

Representative cases from the test sets: (a) and (b) reasonable lumen segmentation from the linear-regression CNN on images with medium-sized bifurcations; (c) and (d) poorer results at major bifurcations, where the appropriate boundary for segmenting the main vessel was ambiguous. Blue and red contours represent CNN segmentation and the gold standard, respectively. Scale bar in (a) represents 500 microns.


Based on the results obtained with the optimal training quantity (45 pullback datasets), we calculated luminal area estimates in all 19 test pullbacks, as tabulated in Table 3. Manual segmentation (i.e., the gold standard) yields a median (interquartile range) luminal area of 5.28 (3.88, 7.45) mm2, matching well with the CNN segmentation result of 5.26 (3.93, 7.45) mm2. The median (interquartile range) absolute percentage error of luminal area was 1.38% (0.63%, 2.62%), which is statistically significantly below 2% (p<0.001) by the one-tailed Wilcoxon signed-rank test. Figure 5 shows two representative examples of the 3-D reconstructed vessel wall from two different pullbacks for visual comparison of CNN regression (middle column) against gold-standard manual (left column) segmentation. The vessel wall is color-coded with the cross-sectional luminal area; the difference in luminal area between CNN regression and gold-standard segmentation is color-coded on the vessel wall in the right column.

Table 3

Luminal area in 19 test pullbacks with optimal training.

Method                           Median (interquartile range)
Luminal area (mm2)
  Manual segmentation area       5.28 (3.88, 7.45)
  CNN segmentation area          5.26 (3.93, 7.45)
Percentage error (%)a
  Signed percentage error        0.06 (−1.24, 1.53)
  Absolute percentage errorb     1.38 (0.63, 2.62)

a Normalized by manual segmentation area.

b Significantly below 2% (p<0.001).

Fig. 5

Reconstruction of the vessel wall from two different pullbacks [(a) and (b)] for visual comparison of CNN regression segmentation against the gold-standard manual segmentation. Vessel walls (left and middle columns) are color-coded with cross-sectional luminal area; the difference in luminal area is displayed on the right. Axes are in mm and color bars indicate luminal area in mm2.


3.2.

Interobserver Variability Against Convolutional Neural Networks Accuracy

The Bland–Altman analysis among the three observers showed a bias (mean signed difference) of 0.0 mm2 and limits of agreement of ±0.599 mm2 in luminal area estimation [Fig. 6(a)]. Comparing the CNN with all observers, the bias was 0.057 mm2 and the variability, in terms of limits of agreement, was comparable at ±0.665 mm2 [Fig. 6(b)]. These results suggest that the automated segmentation had only a small bias (0.057 mm2) toward overestimating luminal area, and that the variation between automated and manual estimates of luminal area was only slightly greater than the interobserver variability among human observers.

Fig. 6

Bland–Altman plot analysis of luminal area for all possible (a) pair-comparisons among different observers and (b) between CNN and observers for the 100 randomly selected images from the test set.


4.

Discussion

Lumen dimension is an important factor in the optimization of percutaneous coronary intervention. This measure allows the clinician to localize lesions and measure their length along the vessel wall before selecting the optimal stent for deployment. It also allows indirect assessment of the quality of stenting (i.e., based on total expansion of the narrowed artery) and is the first step toward quantifying the amount of stent malapposition. Misinterpretation of lesion location and length has both clinical and financial consequences, as additional stents are required for redeployment, and overlapping of multiple stents is often associated with an increased incidence of restenosis, thrombosis, and adverse clinical outcomes.33

Manually quantifying coronary lumen dimension from IVOCT images over the entire extent of the imaged segment is currently not clinically feasible in view of the number of images per pullback (i.e., >100). Automatic segmentation is desirable but challenging due to the significant variety of image features and artifacts encountered in routine scanning, which restricts the operation of most image processing algorithms to a specific subset of good-quality images. Deep learning techniques have been shown to be more robust on heterogeneous pools of input images,28 and this is also demonstrated by our results. Our study is the first to apply such a technique, combined with a linear-regression approach, to the automatic segmentation of the lumen in IVOCT images.

Our results showed a notable increase in CNN accuracy up to 25 training pullbacks, and incremental improvements thereafter. The median accuracy in luminal radius at each radial location, against a manual gold standard, was 21.87 μm at optimal training with 45 training pullbacks, which is comparable with the OCT systems' axial resolution (15 μm). The median luminal area was marginally greater by manual segmentation than by CNN segmentation (i.e., 5.28 versus 5.26 mm2), yielding a median error of 1.38% (i.e., significantly <2%, p<0.001). The CNN also has good limits of agreement against all observers (±0.665 mm2), comparable with the limits of agreement among the observers themselves (±0.599 mm2).

Published algorithms have required the prior removal of guidewires or blood artifacts from the images, as well as interpolation of output contours across guidewire shadows and bifurcations,11,12,27,34 to complete an accurate segmentation. Our linear-regression CNN algorithm did not require additional pre- or postprocessing of the data, with the behavior across these features arising implicitly from the training data. In addition, the proposed method works on a wide spectrum of IVOCT images, whether in the presence or absence of stent struts; we found this approach useful in assessing patients both pre- and poststenting. Furthermore, the CNN segmentation was able to segment images regardless of stent type, with no prior information on the implanted type needed, as is required by some other segmentation techniques,35 making it applicable in a wider range of clinical settings.

We note that while the training time was significant (13.8 h for 45 training pullbacks), this is all precomputed prior to clinical usage. The subsequent time to process a test image was extremely small (40.6 ms). Thus, the use of linear-regression CNNs offers the potential of intraoperative assessment of the vessel lumen during an intervention.

Limitations of the algorithm occur in areas with highly irregular lumen shapes and at major bifurcations, where the vessel lumen of the main branch is ambiguous even for manual segmentation. We note that this implementation of the algorithm adopts a two-dimensional processing approach, in which each image is processed independently. Extending this to a volumetric approach, where adjacent slices influence the segmentation of each image, may yield more stable results in these situations. Alternatively, some form of energy minimization may be incorporated into the CNN cost function to enforce additional regularization of the lumen shape.

5.

Conclusion

This paper has demonstrated a linear-regression CNN for the segmentation of the vessel lumen in IVOCT images. The algorithm was tested on clinical data and compared against a manual gold standard. The results suggest that the CNN provides accurate estimates of the lumen boundary, with errors only slightly greater than the interobserver variability among multiple human observers. In addition, the algorithm was fast, processing each test image in 40.6 ms. Our results suggest that the linear-regression CNN-based approach has the potential to be incorporated into a clinical workflow and provide quantitative assessment of the vessel lumen in an intraoperative time frame.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Acknowledgments

This research was funded by the University of Malaya Research Grant (RP028A-14HTM) and the University of Malaya Postgraduate Research Grant (PG052-2015B). Prof. McLaughlin is supported by a Premier’s Research and Industry Fund grant provided by the South Australian Government Department of State Development, and by the Australian Research Council (CE140100003 and DP150104660).

References

1. S. Mendis, "Global status report on noncommunicable diseases 2014," (2014).

2. American Heart Association, "Cardiac procedures and surgeries," (2016).

3. K. Nikolaou et al., "MRI and CT in the diagnosis of coronary artery disease: indications and applications," Insights Imaging 2(1), 9–24 (2011). http://dx.doi.org/10.1007/s13244-010-0049-0

4. A. C. De Franco and S. E. Nissen, "Coronary intravascular ultrasound: implications for understanding the development and potential regression of atherosclerosis," Am. J. Cardiol. 88(10), 7–20 (2001). http://dx.doi.org/10.1016/S0002-9149(01)02109-9

5. H. M. Garcìa-Garcìa et al., "IVUS-based imaging modalities for tissue characterization: similarities and differences," Int. J. Cardiovasc. Imaging 27(2), 215–224 (2011). http://dx.doi.org/10.1007/s10554-010-9789-7

6. H. G. Bezerra et al., "Intracoronary optical coherence tomography: a comprehensive review: clinical and research applications," JACC Cardiovasc. Interventions 2(11), 1035–1046 (2009). http://dx.doi.org/10.1016/j.jcin.2009.06.019

7. D. Stamper, N. J. Weissman and M. Brezinski, "Plaque characterization with optical coherence tomography," J. Am. Coll. Cardiol. 47(8), C69–C79 (2006). http://dx.doi.org/10.1016/j.jacc.2005.10.067

8. I.-K. Jang et al., "Visualization of coronary atherosclerotic plaques in patients using optical coherence tomography: comparison with intravascular ultrasound," J. Am. Coll. Cardiol. 39(4), 604–609 (2002). http://dx.doi.org/10.1016/S0735-1097(01)01799-5

9. A. Karanasos et al., "OCT assessment of the long-term vascular healing response 5 years after everolimus-eluting bioresorbable vascular scaffold," J. Am. Coll. Cardiol. 64(22), 2343–2356 (2014). http://dx.doi.org/10.1016/j.jacc.2014.09.029

10. M. Jaguszewski and U. Landmesser, "Optical coherence tomography imaging: novel insights into the vascular response after coronary stent implantation," Curr. Cardiovasc. Imaging Rep. 5(4), 231–238 (2012). http://dx.doi.org/10.1007/s12410-012-9138-4

11. S. Celi and S. Berti, "In-vivo segmentation and quantification of coronary lesions by optical coherence tomography images for a lesion type definition and stenosis grading," Med. Image Anal. 18(7), 1157–1168 (2014). http://dx.doi.org/10.1016/j.media.2014.06.011

12. Z. Wang et al., "Semiautomatic segmentation and quantification of calcified plaques in intracoronary optical coherence tomography images," J. Biomed. Opt. 15(6), 061711 (2010). http://dx.doi.org/10.1117/1.3506212

13. T. Adriaenssens et al., "Automated detection and quantification of clusters of malapposed and uncovered intracoronary stent struts assessed with optical coherence tomography," Int. J. Cardiovasc. Imaging 30(5), 839–848 (2014). http://dx.doi.org/10.1007/s10554-014-0406-z

14. G. J. Tearney et al., "Consensus standards for acquisition, measurement, and reporting of intravascular optical coherence tomography studies: a report from the international working group for intravascular optical coherence tomography standardization and validation," J. Am. Coll. Cardiol. 59(12), 1058–1072 (2012). http://dx.doi.org/10.1016/j.jacc.2011.09.079

15. K. Sihan et al., "A novel approach to quantitative analysis of intravascular optical coherence tomography imaging," in Computers in Cardiology, 1089–1092 (2008). http://dx.doi.org/10.1109/CIC.2008.4749235

16. S. Gurmeric et al., "A new 3-D automated computational method to evaluate in-stent neointimal hyperplasia in in-vivo intravascular optical coherence tomography pullbacks," Lect. Notes Comput. Sci. 5762, 776–785 (2009). http://dx.doi.org/10.1007/978-3-642-04271-3

17. M. C. Moraes, D. A. C. Cardenas and S. S. Furuie, "Automatic lumen segmentation in IVOCT images using binary morphological reconstruction," Biomed. Eng. Online 12, 78 (2013). http://dx.doi.org/10.1186/1475-925X-12-78

18. S. Tsantis et al., "Automatic vessel lumen segmentation and stent strut detection in intravascular optical coherence tomography," Med. Phys. 39(1), 503–513 (2012). http://dx.doi.org/10.1118/1.3673067

19. A. G. Roy et al., "Lumen segmentation in intravascular optical coherence tomography using backscattering tracked and initialized random walks," IEEE J. Biomed. Health Inf. 20(2), 606–614 (2016). http://dx.doi.org/10.1109/JBHI.2015.2403713

20. C. S. Lee et al., "Deep-learning based, automated segmentation of macular edema in optical coherence tomography," Biomed. Opt. Express 8(7), 3440–3448 (2017). http://dx.doi.org/10.1364/BOE.8.003440

21. A. Krizhevsky, I. Sutskever and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems, 1097–1105 (2012).

22. M. Havaei et al., "Brain tumor segmentation with deep neural networks," Med. Image Anal. 35, 18–31 (2017). http://dx.doi.org/10.1016/j.media.2016.05.004

23. Q. Li et al., "Medical image classification with convolutional neural network," in 13th Int. Conf. on Control Automation Robotics and Vision (ICARCV), 844–848 (2014). http://dx.doi.org/10.1109/ICARCV.2014.7064414

24. M. Halicek et al., "Deep convolutional neural networks for classifying head and neck cancer using hyperspectral imaging," J. Biomed. Opt. 22(6), 060503 (2017). http://dx.doi.org/10.1117/1.JBO.22.6.060503

25. L. Fang et al., "Automatic segmentation of nine retinal layer boundaries in OCT images of non-exudative AMD patients using deep learning and graph search," Biomed. Opt. Express 8(5), 2732–2744 (2017). http://dx.doi.org/10.1364/BOE.8.002732

26. P. Prentašić et al., "Segmentation of the foveal microvasculature using deep learning networks," J. Biomed. Opt. 21(7), 075008 (2016). http://dx.doi.org/10.1117/1.JBO.21.7.075008

27. A. Abdolmanafi et al., "Deep feature learning for automatic tissue classification of coronary artery using optical coherence tomography," Biomed. Opt. Express 8(2), 1203–1220 (2017). http://dx.doi.org/10.1364/BOE.8.001203

28. L. K. Tan et al., "Convolutional neural network regression for short-axis left ventricle segmentation in cardiac cine MR sequences," Med. Image Anal. 39, 78–86 (2017). http://dx.doi.org/10.1016/j.media.2017.04.002

29. J. Schindelin et al., "The ImageJ ecosystem: an open platform for biomedical image analysis," Mol. Reprod. Dev. 82, 518–529 (2015). http://dx.doi.org/10.1002/mrd.22489

30. D.-A. Clevert, T. Unterthiner and S. Hochreiter, "Fast and accurate deep network learning by exponential linear units (ELUs)," (2015).

31. L. Rokach and O. Maimon, "Clustering methods," in Data Mining and Knowledge Discovery Handbook, 321–352, Springer, Boston, Massachusetts (2005).

32. D. Kingma and J. Ba, "Adam: a method for stochastic optimization," (2014).

33. K. Suzuki, "Mining of training samples for multiple learning machines in computer-aided detection of lesions in CT images," in IEEE Int. Conf. on Data Mining Workshop (ICDMW), 982–989 (2014). http://dx.doi.org/10.1109/ICDMW.2014.111

34. G. J. Ughi et al., "Fully automatic three-dimensional visualization of intravascular optical coherence tomography images: methods and feasibility in vivo," Biomed. Opt. Express 3(12), 3291–3303 (2012). http://dx.doi.org/10.1364/BOE.3.003291

35. G. J. Ughi et al., "Automatic segmentation of in-vivo intra-coronary optical coherence tomography images to assess stent strut apposition and coverage," Int. J. Cardiovasc. Imaging 28(2), 229–241 (2012). http://dx.doi.org/10.1007/s10554-011-9824-3

Biography

Yan Ling Yong received his bachelor’s degree in biotechnology from Pennsylvania State University, USA. Currently, he is a postgraduate student at the University of Malaya, Malaysia, performing research in image processing.

Li Kuo Tan received his master's degree in biomedical engineering from Monash University, Australia. He is a lecturer at the University of Malaya, Malaysia. His research interests include medical imaging and image processing.

Robert A. McLaughlin is the chair of biophotonics at the University of Adelaide, Australia. He received his PhD from the University of Western Australia and subsequently was a postdoc at the University of Oxford.

Kok Han Chee is a senior consultant cardiologist at the University of Malaya, Malaysia. His research interests include atrial fibrillation and diabetic cardiomyopathy.

Yih Miin Liew received her PhD from the University of Western Australia, Perth. Currently, she is a senior lecturer at the University of Malaya, Malaysia. She is active in medical imaging and image processing research for healthcare.

© 2017 Society of Photo-Optical Instrumentation Engineers (SPIE) 1083-3668/2017/$25.00
Yan Ling Yong, Li Kuo Tan, Robert A. McLaughlin, Kok Han Chee, and Yih Miin Liew "Linear-regression convolutional neural network for fully automated coronary lumen segmentation in intravascular optical coherence tomography," Journal of Biomedical Optics 22(12), 126005 (23 December 2017). https://doi.org/10.1117/1.JBO.22.12.126005
Received: 5 October 2017; Accepted: 1 December 2017; Published: 23 December 2017
KEYWORDS
Image segmentation, Convolutional neural networks, Optical coherence tomography, Image processing, Arteries, Error analysis, Gold
