Segmentation of yeast cell’s bright-field image with an edge-tracing algorithm
Linbo Wang, Simin Li, Zhenglong Sun, Gang Wen, Fan Zheng, Chuanhai Fu, Hui Li
Abstract
Phenotype analysis of yeast cells requires high-throughput imaging and automatic analysis of abundant image data. First, each cell needs to be segmented and labeled in the bright-field images. However, the ambiguous boundaries of yeast cells in bright-field images cause traditional segmentation algorithms to fail. We propose a segmentation algorithm based on the morphological characteristics of yeast cells. Seed points are first identified along the cell contour and then connected by an edge-tracing approach. In this way, “ill-detected” noise points are removed so that the edges of yeast cells can be successfully extracted in bright-field images with sparsely distributed cells. In densely packed images, yeast cells with normal morphology can also be correctly segmented and labeled.

1.

Introduction

Yeast is a commonly used model organism in biology due to its simple growth requirements and genetic tractability.1–5 In genetic studies, phenotype studies of yeast carrying mutations related to human genetic diseases have been a great help in improving human disease diagnosis and treatment.6–10 In such studies, massive numbers of yeast cell images are generally captured with microscopes to observe the different cell structures and behaviors under mutation or different drug treatments. However, the morphological analysis of yeast cells, which mainly depends on manual measurements by researchers, is time-consuming and subject to personal bias. Thus, an automatic and efficient segmentation algorithm for yeast cell images is required for analyzing yeast cell morphology and detailed structures.

In most studies, both bright-field and fluorescent images of yeast cells are recorded to reflect the morphological and structural changes.11 For example, Fig. 1 shows images of yeast cells, with the bright-field image indicating morphology and the fluorescent images indicating the corresponding microtubules and mitochondria, respectively. Before the characteristics of mitochondria and microtubules can be analyzed, each cell needs to be accurately segmented and labeled from the bright-field image. However, bright-field images are much harder to segment than fluorescent images because of the low contrast between background and cells, the discontinuous boundaries, and the halo artifact around the cell wall.12

Fig. 1

Examples of bright-field and fluorescent yeast cell images: (a) the bright-field image of yeast cells, (b) the fluorescent image of mitochondria labeled with GFP, and (c) the fluorescent image of microtubules labeled with mCherry.


Traditional segmentation methods, including edge-based and threshold-based methods, fail to correctly segment bright-field cell images. The Canny edge detector13 is too sensitive to local noise and the halo artifact, which leads to low-quality edge maps with a messy background and discontinuous edge segments.14 The complex features of bright-field cell images also lead to substantial segmentation errors with Otsu’s method,15 which generally requires high contrast and an apparent intensity difference between background and target. The active contour model16 has been more successful in cell image segmentation by providing an iterative energy-minimizing method controlled by external constraint forces and image forces. Recently, various improved algorithms based on the active contour model have been developed to solve the cell image segmentation problem,17,18 but the approximate location of the cell contour still has to be set by hand. Moreover, these methods are better suited to fluorescent images, which have clear edges and intensity information; when applied to bright-field cell images, the contour tends to settle on the halo artifact.

In recent years, several segmentation algorithms specialized for bright-field cell images have been developed to address the difficult issues mentioned above. Weber and Albrecht19 developed an algorithm that includes background subtraction and binarization based on texture discrimination by means of a rank operator, but it requires double-view imaging that combines bright-field and reflection interference contrast video microscopy. Bradbury and Wan12 proposed a bright-field cell image segmentation method using spectral and k-means clustering techniques, but for cells close to the background intensity it can find only a portion of the cell wall. Zhang et al.20 reported a method for detecting and segmenting yeast cells in bright-field images that can only detect circular yeast cells. In addition, Kang and Wan21 described a multiscale framework for segmenting bright-field cell images using the Bhattacharyya measure, and Chen and Wan22 presented two mathematical models based on principal component pursuit. Yang et al.23 developed an image processing algorithm that automatically extracts geometrical features of yeast cells. However, these methods are mostly aimed at images containing a single cell and have difficulty identifying individual cells that adhere to one another. Furthermore, because multiple parameters must be set, over-segmentation or under-segmentation can occur if the parameters are not chosen correctly.

In this paper, we present an automatic segmentation method with an efficient edge-tracing algorithm for bright-field images of yeast cells. The algorithm first detects accurate cell contour points, called seed points, automatically and then connects them with a fast and robust edge-tracing algorithm. In images with sparsely distributed yeast cells, 100% of the cells can be segmented properly. In densely packed images, cells with normal morphology can also be correctly segmented, while severely distorted cells are ruled out. The bright-field segmentation results can further be used to label the corresponding fluorescent images.

2.

Image Acquisition

Fission yeast cells expressing GFP-cox4 (a mitochondria marker) and mCherry-atb2 (a tubulin marker) were grown in Edinburgh minimal medium (EMM) supplemented with adenine, leucine, uracil, histidine, and lysine (0.225 g/L each) at 30°C. Exponential-phase cells were then collected and sandwiched between an EMM agarose pad and a coverslip for imaging. Bright-field and fluorescent images were recorded with a DeltaVision microscope (GE Healthcare) using an Olympus PlanApo N 60× 1.4 NA oil objective and a Photometrics CoolSnap HQ2 camera.

3.

Segmentation Algorithm

3.1.

Flowchart of the Algorithm

Figure 2 shows the flowchart of our segmentation method for bright-field yeast cell images. Histogram equalization and Gaussian filtering are successively applied to enhance image contrast and suppress noise. The edge detection step is conducted with two threshold values to produce a “cleaner” and an “intact” edge map, respectively, both of which are reduced to single-pixel width by a series of morphological operations. The initial seed points are then detected from the latter edge map according to the approximate cell locations determined by the Hough transformation of the former edge map. After screening the seed points to eliminate non-edge ones, a smart edge-tracing algorithm is applied to connect the remaining seed points into an accurate, closed, and smooth cell contour.
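
For orientation, the sketch below summarizes this pipeline as MATLAB-style pseudocode. The subfunction names (preprocessImage, directionalGradient, and so on) are placeholders for the steps detailed in Secs. 3.2 and 3.3, not functions from the original implementation.

% High-level pipeline sketch; all subfunction names are illustrative only.
function masks = segmentBrightField(I)
    Ien        = preprocessImage(I);                % histogram equalization + Gaussian smoothing
    G          = directionalGradient(Ien);          % line-detection mask, four directions
    [Ehi, Elo] = dualThresholdEdges(G);             % "clean" and "intact" single-pixel edge maps
    centers    = houghArcCenters(Ehi);              % approximate cell locations (arc centers)
    seeds      = detectSeedPoints(centers, Elo, G); % ray casting + sine-fit screening (Algorithm 1)
    masks      = traceEdges(seeds, G, Elo);         % connect seeds along the true boundary (Algorithm 2)
end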

Fig. 2

Flowchart of segmentation algorithm of bright-field yeast cell images.


3.2.

Seed Points Detection

As a preprocessing step, histogram equalization and a 5×5 Gaussian kernel with σ=0.8 are first applied to the original bright-field yeast cell images to obtain the full gray-level range (0 to 255) and to reduce sharp noise. As an example, the preprocessing result of a raw image is shown in Fig. 3(b).
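
A minimal MATLAB sketch of this preprocessing step, assuming an 8-bit grayscale input; the file name is illustrative, and the kernel size and σ are the values quoted above.

% Preprocessing: histogram equalization followed by 5x5 Gaussian smoothing (sigma = 0.8).
I   = im2uint8(imread('yeast_brightfield.tif'));  % illustrative file name
Ieq = histeq(I);                                  % stretch gray levels to the full 0-255 range
h   = fspecial('gaussian', [5 5], 0.8);           % 5x5 Gaussian kernel with sigma = 0.8
Ien = imfilter(Ieq, h, 'replicate');              % suppress sharp noise, Fig. 3(b)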

Fig. 3

Illustration of initial contour point detection: (a) raw image, (b) preprocessing result, (c) gradient map, (d1) high-threshold edge map, (e1) single-pixel edge map of (d1), where the red circle indicates the Hough transform result, (d2) low-threshold edge map, (e2) single-pixel edge map of (d2), and (f) initial seed points.


Different from classical edge detection operators, which define the first or second derivative as the gradient and therefore give the halo and the cell boundary the same gradient response, a line detection mask [Fig. 4(a)] is used to compute the gradients in four directions, Gx (horizontal), Gxy (45 deg), Gy (vertical), and Gyx (−45 deg), respectively. The gradient magnitude G of one pixel is then obtained by the formula G = √(Gx² + Gxy² + Gy² + Gyx²). The gradient magnitude map is shown in Fig. 3(c).
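
The following MATLAB sketch illustrates the directional gradient computation. The exact mask of Fig. 4(a) is not reproduced here; the classic 3×3 line-detection kernels are assumed as a stand-in.

% Four-direction line responses and the combined gradient magnitude map of Fig. 3(c).
Id  = im2double(Ien);
kx  = [-1 -1 -1;  2  2  2; -1 -1 -1];   % horizontal lines
ky  = kx';                              % vertical lines
kxy = [-1 -1  2; -1  2 -1;  2 -1 -1];   % +45-deg lines
kyx = fliplr(kxy);                      % -45-deg lines
Gx  = imfilter(Id, kx,  'replicate');
Gy  = imfilter(Id, ky,  'replicate');
Gxy = imfilter(Id, kxy, 'replicate');
Gyx = imfilter(Id, kyx, 'replicate');
G   = sqrt(Gx.^2 + Gxy.^2 + Gy.^2 + Gyx.^2);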

Fig. 4

(a) Line detection mask and (b) selection of two threshold values based on the histogram of the gradient magnitude map.


Considering the different edge-preserving requirements of the subsequent steps of this method, we apply two threshold values to the gradient map, both selected from the histogram of the gradient magnitude map. As shown in Fig. 4(b), the distribution of gradients has a sectional structure. We generally take the intermediate value between the second and third sections as the low threshold and that between the third and fourth sections as the high one. The high threshold produces a “clean” edge map [Fig. 3(d1)] used for later cell identification by Hough transformation, whereas the low one produces a more complete cell-border map used later to extract the seed points [Fig. 3(d2)].
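
One possible way to automate this threshold selection is sketched below: the gradient histogram is smoothed and its valleys are taken as the section boundaries, with the second and third valleys serving as the low and high thresholds. This heuristic is an assumption; the paper only specifies that the thresholds lie between the sections of Fig. 4(b).

% Pick the low and high thresholds from valleys of the smoothed gradient histogram.
[cnt, edges] = histcounts(G(:), 256);
ctr  = (edges(1:end-1) + edges(2:end)) / 2;      % bin centers
scnt = movmean(cnt, 9);                          % smooth away spurious minima
isValley = [false, scnt(2:end-1) < scnt(1:end-2) & scnt(2:end-1) < scnt(3:end), false];
valleys  = ctr(isValley);                        % section boundaries (assumes >= 3 valleys)
thLow  = valleys(2);                             % between the 2nd and 3rd sections
thHigh = valleys(3);                             % between the 3rd and 4th sections
Elo = G >= thLow;                                % "intact" edge map, Fig. 3(d2)
Ehi = G >= thHigh;                               % "clean" edge map, Fig. 3(d1)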

However, we can see in Figs. 3(d1) and 3(d2) that some clustered noise edge points remain in the edge maps, especially inside the cells, and that the remaining cell edges are several pixels wide. The next procedure is therefore to eliminate most of the noise edge blobs and obtain single-pixel-wide, smooth edges by means of a series of morphological operations, including opening with four-direction linear structuring elements, filling small holes, and thinning the remaining edges. In our experiments, the yeast cells are mostly sausage-shaped, with both ends appearing as circular arcs. Yeast cells distorted or damaged by the extrusion force from neighboring cells should be discarded.11 The arc centers of normal yeast cells can therefore be detected by Hough transformation, as shown in Fig. 3(e1). In this way, we reliably obtain a location inside each cell instead of being misled by noise edges.
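
A sketch of the morphological cleanup and arc-center detection in MATLAB; the structuring-element length, minimum blob size, radius range, and sensitivity are illustrative values, not parameters taken from the paper.

% Remove noise blobs, thin the edges to one pixel, and detect arc centers.
% (The same cleanup is also applied to the low-threshold map Elo.)
Eclean = false(size(Ehi));
for ang = [0 45 90 135]                              % opening with four-direction line elements
    Eclean = Eclean | imopen(Ehi, strel('line', 5, ang));
end
Eclean = imfill(Eclean, 'holes');                    % fill small holes
Eclean = bwmorph(Eclean, 'thin', Inf);               % thin remaining edges to single-pixel width
Eclean = bwareaopen(Eclean, 20);                     % drop isolated noise fragments
% The arc-shaped cell ends respond as circles, so a circular Hough transform
% yields one center inside each normally shaped cell [red circle in Fig. 3(e1)].
[centers, radii] = imfindcircles(Eclean, [8 25], 'Sensitivity', 0.9);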

To detect the initial contour points of every yeast cell, the arc centers and the low-threshold edge map are combined by emitting rays at equal angular intervals from each arc center and recording the first intersection point of every ray with the detected edges. Here, we take two types of yeast cells, individual and clustered cells, as examples, as shown in Fig. 5. The yellow arrows and red points in Fig. 5(a) represent the rays and arc centers, respectively. In Fig. 5(b), the initial seed points are highlighted in blue and connected into a closed boundary, marked in red, using linear interpolation. The distances between these intersection points and the center are plotted and fitted by a nonlinear least-squares procedure based on a higher-order sine model (four orders are sufficient here) in Fig. 5(c). Note that if there were no discontinuous edges, the fitted distance curve would be smooth and continuous, with only one peak corresponding to the far end of the cell. In practice, the distance sequence is a complex, discontinuous curve with more than one peak; the extra peaks correspond to noise points. We therefore first take the point with the maximum peak value as the far end of the cell boundary, which also gives the long axis of the cell. Then, whenever the offset between a measured distance and the fitted value exceeds a given threshold, that point is removed from the seed-point sequence. Algorithm 1 summarizes this procedure for detecting seed points by fitting a higher-order sine curve.
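
A sketch of the ray-casting step for one cell, assuming the cleaned low-threshold edge map Elo and an arc center (cx, cy) from the Hough step; the number of rays and the maximum ray length are illustrative.

% Cast rays at equal angular intervals from the arc center and record the first
% edge pixel hit along each ray: these are the initial seed points of Fig. 5(b).
nRays = 72;  maxLen = 60;                       % illustrative values
theta = (0:nRays-1) * (360 / nRays);
H = nan(nRays, 2);  D = nan(nRays, 1);          % intersection points and their distances
[rows, cols] = size(Elo);
for k = 1:nRays
    for r = 1:maxLen                            % march outward one pixel at a time
        x = round(cx + r * cosd(theta(k)));
        y = round(cy + r * sind(theta(k)));
        if x < 1 || y < 1 || x > cols || y > rows,  break;  end
        if Elo(y, x)                            % first intersection with the edge map
            H(k, :) = [x y];  D(k) = r;
            break;
        end
    end
end

Rays that hit no edge leave NaN entries in H and D and should be dropped before the screening step of Algorithm 1.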

Fig. 5

Results of seed-point detection, including detection of the initial seed points and their screening with a fitted curve to obtain the final seed points. Two types of cells with different complexities are shown: (a) cell edges, (b) initial seed points, (c) fitted distance curves, and (d) final seed points.


Algorithm 1

Algorithm to detect seed points.

Input: Intersection points sequence H, distance sequence D, threshold Thresh
Output: Seed points sequence P
function SeedpointsDetect (H, D, Thresh)
 Fit a higher order sine curve S with D
 n ← 1
 for i = 1 to length(H)
  if abs(S[i] − D[i]) ≤ Thresh
   P[n] ← H[i]
   n ← n + 1
  end if
 end for
 return P
end function
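
Algorithm 1 maps almost directly onto MATLAB. The sketch below uses the Curve Fitting Toolbox 'sin4' library model (a sum of four sine terms) as the higher-order sine curve, which is an assumption about the exact fitting model.

% Screen the initial seed points: fit a fourth-order sine model to the distance
% sequence and keep only points whose distance stays close to the fitted curve.
function P = SeedpointsDetect(H, D, Thresh)
    idx  = (1:numel(D))';                     % ray index serves as the abscissa
    S    = fit(idx, D(:), 'sin4');            % sum of four sine terms (Curve Fitting Toolbox)
    keep = abs(S(idx) - D(:)) <= Thresh;      % small residual -> genuine boundary point
    P    = H(keep, :);                        % final seed points, Fig. 5(d)
end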

3.3.

Edge Tracing by Connecting the Seed Points

The goal of our algorithm is to find a closed, smooth, and accurate boundary for each yeast cell in a bright-field image. The seed points detected above cannot by themselves represent the complete cell boundary, but they indicate the direction in which the cell contour extends. We build on ideas from Ref. 14, which imitates children’s dot-to-dot boundary completion games, and propose an efficient edge-tracing method that connects the seed points along the real edge contour. To ensure that the cell edge grows in the right direction and skips wrong edge points, we use the principle that the growing direction at an edge point is jointly controlled by the gradient values in its neighborhood and the tangent orientation of the boundary defined so far.

The routing procedure starts from the edge points detected by the Hough transformation and regards this edge as the initial cell boundary. At each step, the distance between the current edge point and the next seed point is calculated. If this distance is small enough, the two points are connected directly by a straight line, and the points on this line are marked as cell contour points. Otherwise, the tangent direction of the current edge point is estimated to select the candidates. As seen in Fig. 6, the tangent orientation is classified into four directions, and the red arrows indicate the candidate edge points, among which the one with the maximum gradient is chosen as the next edge point. Since the tracing strategy is the same for each tangent direction, Algorithm 2 shows, as an example, how to proceed to the left of an edge point whose tangent direction is 0 deg.

Fig. 6

Illustration of the routing procedure for the four tangent directions: (a) 0 deg, (b) 45 deg, (c) 90 deg, and (d) −45 deg.


Algorithm 2

Algorithm to proceed to the left of an edge point when its tangent direction is 0 deg.

Input: Current edge point [x, y], next seed point S, gradient map G, edge map E, threshold Thresh
Output: Next edge point P
function EdgeTracing ([x, y], S, G, E, Thresh)
 E[x, y] ← EDGE
 Compute the distance d between [x, y] and S
 if d ≤ Thresh
  [x, y] ← S
  P ← S; E[S] ← EDGE
 else if G[x−1, y−1] > G[x−1, y] and G[x−1, y−1] > G[x−1, y+1]
  x ← x − 1; y ← y − 1
 else if G[x−1, y] > G[x−1, y−1] and G[x−1, y] > G[x−1, y+1]
  x ← x − 1; y ← y
 else
  x ← x − 1; y ← y + 1
 end if
 P ← [x, y]
 return P
end function
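
A MATLAB rendering of this single tracing step (0-deg tangent, moving left), assuming image coordinates G(row, col) with row = y and col = x. Bounds checking is omitted, only the seed itself is marked in the short-distance case (the paper marks the whole connecting line), and ties between neighbours fall to the upper one, a minor simplification of the explicit comparisons in Algorithm 2.

% One tracing step for a 0-deg tangent, moving left: jump to the next seed point
% if it is close enough, otherwise step to the left-hand neighbour with the
% largest gradient magnitude.
function [P, E] = edgeTraceLeft0(x, y, S, G, E, Thresh)
    EDGE = 1;
    E(y, x) = EDGE;                              % mark the current point as boundary
    d = hypot(S(1) - x, S(2) - y);               % distance to the next seed point
    if d <= Thresh
        P = S;                                   % close enough: connect to the seed directly
        E(S(2), S(1)) = EDGE;
    else
        cand = [x-1, y-1; x-1, y; x-1, y+1];     % three candidates to the left
        g = G(sub2ind(size(G), cand(:,2), cand(:,1)));
        [~, k] = max(g);                         % strongest gradient wins
        P = cand(k, :);
    end
end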

4.

Results

We apply the proposed segmentation algorithm to bright-field images of yeast cells with different cell densities. The original size of our bright-field yeast cell images is 512×512 pixels. All tests are run and all results computed on a PC using MATLAB.

Figure 7(a) shows the segmentation results of the proposed algorithm applied to Fig. 1(a), in which the yeast cells are sparsely distributed. The results show that the new method automatically finds all the recognizable yeast cells and accurately locates both single cells [Figs. 7(a1)–7(a4)] and clustered cells [Fig. 7(a5)] at their actual boundaries. Comparing Fig. 7(a5) with Fig. 5(b), we find that the seed points provide a good guide for the last step of the proposed segmentation algorithm and can even separate adherent cells successfully. The corresponding mitochondria and microtubules [Figs. 7(b) and 7(c)] in the fluorescent images are then located according to the cell positions and can be used for statistical analysis of the characteristics of interest.

Fig. 7

Segmentation results of bright-field images with sparse cells. For illustration purposes, we show the segmentation of each cell (a1)–(a5) from (a). The corresponding mitochondria and microtubules are shown in (b) and (c).


The algorithm was also applied to bright-field images with densely crowded yeast cells, some of which are even deformed by extrusion. As seen in Fig. 8(a), most of the cells can be recognized and located accurately in this crowded environment. Only very few cells with extremely complex intensity information and strong deformation fail to be detected, for lack of the arc feature. Note that among the recognizable cells in Fig. 8(a), some problems remain, such as cells A and B indicated in Fig. 8(a), largely caused by abnormal fluctuations of the image gradient. We will discuss and address this problem in future work.

Fig. 8

(a) Segmentation results of bright-field images with dense cells; some problems remain (A and B, marked with red circles). (b), (c) The corresponding mitochondria and microtubules, respectively.


5.

Conclusions

This paper presents an automatic and robust segmentation algorithm for bright-field images of yeast cells. A series of basic image processing methods is used to detect the one-pixel-wide edges of cells and the seed points on the cell borders, without complex mathematical theory or parameter tuning. An edge-tracing method is then proposed to connect the seed points along the true cell boundary. The method overcomes the three difficult issues of bright-field cell images and successfully locates all normally shaped cells in the image. In addition, our method can handle other bright-field cell images in which the cell boundary contains a circular arc.

Disclosures

The authors have no relevant financial interests in this article and no potential conflicts of interest to disclose.

Acknowledgments

This work was supported by the Natural Science Foundation of China (NSFC) (Grant Nos. 61475185 and 11504409), Natural Science Foundation of Jiangsu Province (Grant No. BK20150357), and the National Key Research and Development Program of China (No. YFC20170110100).

References

1. A. T. Lörincz and S. I. Reed, “Primary structure homology between the product of yeast cell division control gene CDC28 and vertebrate oncogenes,” Nature 307(5947), 183–185 (1984). https://doi.org/10.1038/307183a0

2. G. Rustici et al., “Periodic gene expression program of the fission yeast cell cycle,” Nat. Genet. 36(8), 809–817 (2004). https://doi.org/10.1038/ng1377

3. V. R. Iyer et al., “Genomic binding sites of the yeast cell-cycle transcription factors SBF and MBF,” Nature 409(6819), 533–538 (2001). https://doi.org/10.1038/35054095

4. I. Simon et al., “Serial regulation of transcriptional regulators in the yeast cell cycle,” Cell 106(6), 697–708 (2001). https://doi.org/10.1016/S0092-8674(01)00494-9

5. U. S. Jung and D. E. Levin, “Genome-wide analysis of gene expression regulated by the yeast cell wall integrity signalling pathway,” Mol. Microbiol. 34(5), 1049–1057 (1999). https://doi.org/10.1046/j.1365-2958.1999.01667.x

6. S. Chowdhury, K. W. Smith, and M. C. Gustin, “Osmotic stress and the yeast cytoskeleton: phenotype-specific suppression of an actin mutation,” J. Cell Biol. 118(3), 561–571 (1992). https://doi.org/10.1083/jcb.118.3.561

7. E. Mutoh et al., “Inducible expression of a gene encoding an L41 ribosomal protein responsible for the cycloheximide-resistant phenotype in the yeast Candida maltose,” J. Bacteriol. 177(18), 5383–5386 (1995). https://doi.org/10.1128/jb.177.18.5383-5386.1995

8. N. Marcoux et al., “Overexpression of MID2 suppress the profilin-deficient phenotype of yeast cells,” Mol. Microbiol. 29(2), 515–526 (1998). https://doi.org/10.1046/j.1365-2958.1998.00944.x

9. J. Liu and J. V. Fons, “Classification of yeast cells from image features to evaluate pathogen conditions,” Proc. SPIE 6506, 65060I (2007). https://doi.org/10.1117/12.714072

10. P. Gourley et al., “Reactive biomolecular divergence in genetically altered yeast cells and isolated mitochondria as measured by biocavity laser spectroscopy: rapid diagnostic method for studying cellular responses to stress and disease,” J. Biomed. Opt. 12(5), 054003 (2007). https://doi.org/10.1117/1.2799198

11. I. B. Dimock and J. W. L. Wan, “Cellular image segmentation using n-agent cooperative game theory,” Proc. SPIE 9784, 97842T (2016). https://doi.org/10.1117/12.2216571

12. L. Bradbury and J. W. L. Wan, “A spectral k-means approach to bright-field cell image segmentation,” in Annual Int. Conf. of the IEEE Engineering in Medicine and Biology, 4748–4751 (2010). https://doi.org/10.1109/IEMBS.2010.5626380

13. J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8(6), 679–698 (1986). https://doi.org/10.1109/TPAMI.1986.4767851

14. C. Topal and C. Akinlar, “Edge drawing: a combined real-time edge and segment detector,” J. Visual Commun. Image Represent. 23(6), 862–872 (2012). https://doi.org/10.1016/j.jvcir.2012.05.004

15. N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Trans. Syst. Man Cybern. 9(1), 62–66 (1979). https://doi.org/10.1109/TSMC.1979.4310076

16. M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: active contour models,” Int. J. Comput. Vision 1(4), 321–331 (1988). https://doi.org/10.1007/BF00133570

17. O. Dzyubachyk et al., “Advanced level-set-based cell tracking in time-lapse fluorescence microscopy,” IEEE Trans. Med. Imaging 29(3), 852–867 (2010). https://doi.org/10.1109/TMI.2009.2038693

18. A. Korzynska et al., “Segmentation of microscope images of living cells,” Pattern Anal. Appl. 10(4), 301–319 (2007). https://doi.org/10.1007/s10044-007-0069-7

19. I. Weber and R. Albrecht, “Image processing for combined bright-field and reflection interference contrast video microscopy,” Comput. Meth. Programs Biomed. 53(2), 113–118 (1997). https://doi.org/10.1016/S0169-2607(97)01810-5

20. C. Zhang et al., “Yeast cell detection and segmentation in bright field microscopy,” in IEEE 11th Int. Symp. on Biomedical Imaging, 1267–1270 (2014). https://doi.org/10.1109/ISBI.2014.6868107

21. S. M. Kang and J. W. L. Wan, “A multiscale graph cut approach to bright-field multiple cell image segmentation using a Bhattacharyya measure,” Proc. SPIE 8669, 86693S (2013). https://doi.org/10.1117/12.2007002

22. Y. Chen and J. W. L. Wan, “Bright-field cell image segmentation by principal component pursuit with a Ncut penalization,” Proc. SPIE 9413, 94133F (2015). https://doi.org/10.1117/12.2081637

23. Y. B. Yang et al., “Image processing and classification algorithm for yeast cell morphology in a microfluidic chip,” J. Biomed. Opt. 16(6), 066008 (2011). https://doi.org/10.1117/1.3589100

Biography

Linbo Wang received her BE degree in information security from the University of Science and Technology of China in 2012 and her ME degree in mechanical engineering from the University of Chinese Academy of Sciences in 2015. She is an assistant researcher at Suzhou Institute of Biomedical Engineering and Technology (SIBET), Chinese Academy of Sciences. Her current research interests include image processing and structured illumination microscopy.

Biographies for the other authors are not available.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Linbo Wang, Simin Li, Zhenglong Sun, Gang Wen, Fan Zheng, Chuanhai Fu, and Hui Li "Segmentation of yeast cell’s bright-field image with an edge-tracing algorithm," Journal of Biomedical Optics 23(11), 116503 (20 November 2018). https://doi.org/10.1117/1.JBO.23.11.116503
Received: 5 July 2018; Accepted: 17 October 2018; Published: 20 November 2018