Metal artifacts are common in CT scans because many patients have metallic implants inserted to restore or enhance bodily function. These streaking artifacts can severely degrade CT image quality and consequently affect clinical diagnosis. In this paper, we propose using the Fourier coefficients of a metal artifact-tainted image as the input to a convolutional neural network, with the Fourier coefficients of the corresponding clean image as the target. We compare the performance of three convolutional neural network models on three kinds of input: sinograms with metal traces, images with streaks, and the Fourier coefficients of artifact-corrupted images. Using Fourier coefficients as input generally gives better artifact reduction, both visually and in quantitative measures, across the different models.
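The Fourier-coefficient input described above can be sketched as follows. This is a minimal illustration with NumPy; the two-channel real/imaginary packing and the centered spectrum are assumptions, since the abstract does not specify the exact encoding:

```python
import numpy as np

def fourier_input(image):
    """Encode an artifact-corrupted CT image as a Fourier-coefficient
    tensor suitable as CNN input: 2D FFT, centered via fftshift, then
    split into real and imaginary channels."""
    coeffs = np.fft.fftshift(np.fft.fft2(image))      # complex spectrum
    return np.stack([coeffs.real, coeffs.imag], axis=0)

def image_from_fourier(tensor):
    """Invert the encoding: recombine the two channels into a complex
    spectrum and apply the inverse FFT."""
    coeffs = tensor[0] + 1j * tensor[1]
    return np.fft.ifft2(np.fft.ifftshift(coeffs)).real

# Round trip: encoding followed by decoding recovers the image, so a
# network trained to map corrupted to clean coefficients can be decoded
# back to a clean image.
img = np.random.rand(64, 64)
assert np.allclose(image_from_fourier(fourier_input(img)), img)
```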
KEYWORDS: Image segmentation, 3D image processing, Image processing algorithms and systems, Visualization, Detection and tracking algorithms, Video, 3D modeling, Data modeling, Computer simulations, Error analysis
Understanding the behavior of cells is an important problem for biologists. Significant research has been done to facilitate this by automating the segmentation of microscopic cellular images. Bright-field images of cells prove to be particularly difficult to segment, due to features such as low contrast, missing boundaries, and broken halos. We present two algorithms for automated segmentation of cellular images. These algorithms are based on a graph-partitioning approach, where each pixel is modeled as a node of a weighted graph. The method combines an effective region force with the Laplacian and total variation boundary forces, respectively, to give the two models. This region force can be interpreted as the conditional probability of a pixel belonging to a certain class (cell or background) given a small set of already labeled pixels. For practicality, we use a small set of only background pixels, taken from the border of the cell images, as the labeled set. Both algorithms are tested on bright-field images and give good results. Due to its faster performance, the Laplacian-based algorithm is also tested on a variety of other datasets, including fluorescent images, phase-contrast images, and 2-D and 3-D simulated images. The results show that the algorithm performs well and consistently across a range of cell image features, such as cell shape, size, contrast, and noise levels.
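The combination of a Laplacian boundary force with a probabilistic region force can be sketched as follows. This is a simplified stand-in for the paper's model, in the spirit of random-walker-style segmentation: the energy form, the edge-weight function, and the parameters `beta` and `lam` are assumptions. As in the abstract, the labelled set is just the background pixels on the image border:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def laplacian_segment(img, region_prob, beta=100.0, lam=1.0):
    """Sketch of graph-partitioning segmentation: minimize
    x^T L x + lam * ||x - region_prob||^2 over the unlabelled pixels,
    with border pixels fixed to the background label (0). L is the
    Laplacian of a 4-neighbour grid graph whose edge weights decay
    with intensity difference (the boundary force); region_prob is the
    per-pixel probability of belonging to the cell (the region force)."""
    h, w = img.shape
    n = h * w
    flat = img.ravel()
    idx = np.arange(n).reshape(h, w)
    # horizontal and vertical edges with intensity-similarity weights
    a = np.concatenate([idx[:, :-1].ravel(), idx[:-1, :].ravel()])
    b = np.concatenate([idx[:, 1:].ravel(), idx[1:, :].ravel()])
    wts = np.exp(-beta * (flat[a] - flat[b]) ** 2)
    W = sp.coo_matrix((wts, (a, b)), shape=(n, n))
    W = (W + W.T).tocsr()
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W
    # labelled set: background pixels on the image border, fixed to 0
    border = np.zeros((h, w), dtype=bool)
    border[0, :] = border[-1, :] = border[:, 0] = border[:, -1] = True
    u = np.flatnonzero(~border.ravel())          # unlabelled pixels
    # optimality condition: (L_uu + lam I) x_u = lam * p_u
    A = (L + lam * sp.identity(n, format='csr'))[u][:, u]
    x = np.zeros(n)
    x[u] = spsolve(A.tocsc(), lam * region_prob.ravel()[u])
    return x.reshape(h, w) > 0.5
```

Because the edge weights vanish across strong intensity boundaries, the smoothness term stops the background label from leaking into the cell, while the region force anchors the interior.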
Image segmentation is an important problem in computer vision and has significant applications in the segmentation of cellular images. Many different imaging techniques exist and produce a variety of image properties that pose difficulties for image segmentation routines. Bright-field images are particularly challenging because of the non-uniform shape of the cells, the low contrast between cells and background, and imaging artifacts such as halos and broken edges. Classical segmentation techniques often produce poor results on these challenging images, and previous attempts at segmenting bright-field images are often limited in scope to the specific images they handle. In this paper, we introduce a new algorithm for automatically segmenting cellular images. The algorithm incorporates two game-theoretic models that allow each pixel to act as an independent agent whose goal is to select its best labelling strategy. In the non-cooperative model, pixels choose strategies greedily, based only on local information. In the cooperative model, pixels can form coalitions, which select labelling strategies that benefit the entire group. Combining these two models produces a method that allows the pixels to balance local and global information when selecting their label. With the addition of k-means and active contour techniques for initialization and post-processing, we achieve a robust segmentation routine. The algorithm is applied to several cell image datasets, including bright-field images, fluorescent images, and simulated images. Experiments show that the algorithm produces good segmentation results across datasets that differ in cell density, cell shape, contrast, and noise levels.
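The non-cooperative model, where each pixel greedily best-responds using only local information, can be sketched as an iterated best-response scheme. This is a simplified illustration, not the paper's exact payoff: the quadratic data term, the neighbour-disagreement penalty, and the parameter `smooth` are assumptions:

```python
import numpy as np

def best_response_labelling(img, mu_bg, mu_cell, smooth=0.5, iters=10):
    """Sketch of the non-cooperative model: each pixel is an agent that
    repeatedly best-responds, choosing the label (0 = background,
    1 = cell) minimizing a local cost: a data term (squared distance of
    its intensity to the class mean) plus a penalty for disagreeing
    with its 4-neighbourhood."""
    labels = (np.abs(img - mu_cell) < np.abs(img - mu_bg)).astype(int)
    for _ in range(iters):
        padded = np.pad(labels, 1, mode='edge')
        # number of 4-neighbours currently playing "cell"
        n_cell = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:])
        cost_bg = (img - mu_bg) ** 2 + smooth * n_cell
        cost_cell = (img - mu_cell) ** 2 + smooth * (4 - n_cell)
        new = (cost_cell < cost_bg).astype(int)
        if np.array_equal(new, labels):   # Nash-like fixed point reached
            break
        labels = new
    return labels
```

Synchronous best response converges here to a labelling where no pixel can lower its own cost by deviating; the cooperative (coalition) model of the paper adds group-level moves on top of this local scheme.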
Segmentation of cells in time-lapse bright-field microscopic images is crucial for understanding cell behaviour in oncological research. However, the complex nature of the cells makes it difficult to segment them accurately, and poor contrast, broken cell boundaries, and the halo artifact pose additional challenges. Standard segmentation techniques, such as edge-based methods, watershed, or active contours, give poor results, and other existing methods for bright-field images cannot provide good results without localized segmentation steps. In this paper, we present two robust mathematical models that segment bright-field cells automatically over the entire image. These models treat cell image segmentation as a background subtraction problem, which can be formulated as a Principal Component Pursuit (PCP) problem. Our first segmentation model is formulated as a PCP with nonnegativity constraints; we exploit the sparse component of the PCP solution to identify the cell pixels. However, there is no control over the quality of the sparse component, and the nonzero entries can scatter all over the image, resulting in a noisy segmentation. The second model improves on the first by combining PCP with spectral clustering. Although the two approaches are seemingly unrelated, we combine them by incorporating the normalized cut into the PCP as a measure of the quality of the segmentation. These two models have been applied to a set of C2C12 cells obtained from bright-field microscopy. Experimental results demonstrate that the proposed models are effective in segmenting cells from bright-field images.
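The background-subtraction view of PCP, splitting the data into a low-rank background plus a sparse foreground, can be sketched with a simple alternating proximal scheme. This is not the authors' solver (their models add nonnegativity constraints and a normalized-cut term), and the parameter choices `lam` and `mu` are heuristic assumptions:

```python
import numpy as np

def pcp(D, lam=None, mu=None, iters=100):
    """Principal Component Pursuit sketch: decompose D into a low-rank
    background L plus a sparse foreground S by alternating
    singular-value thresholding (proximal operator of the nuclear
    norm) and entrywise soft thresholding (proximal operator of the
    l1 norm)."""
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n)) if lam is None else lam
    mu = 0.25 * np.abs(D).mean() if mu is None else mu
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(iters):
        # low-rank update: shrink the singular values of D - S
        U, sig, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U * np.maximum(sig - mu, 0.0)) @ Vt
        # sparse update: entrywise soft thresholding of D - L
        R = D - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam * mu, 0.0)
    return L, S
```

On cell images stacked as columns of D, the sparse component S is where the moving cells end up, which is exactly the component the first model thresholds to identify cell pixels.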
Automatic segmentation of bright-field cell images is important to cell biologists, but is difficult to achieve due to the complex nature of the cells in bright-field images (poor contrast, broken halos, missing boundaries). Standard segmentation techniques, such as the level set method and active contours, are not able to overcome these features of bright-field images and consequently produce poor segmentation results. In this paper, we present a robust segmentation method, which combines the techniques of graph cut, multiresolution, and the Bhattacharyya measure, performed in a multiscale framework, to locate multiple cells in bright-field images. The issue of low contrast in bright-field images is addressed by determining the difference in the intensity profiles of the cells and the background. The resulting segmentation on the entire image frame provides global information; a local segmentation at different regions of interest is then performed to obtain finer details of the segmentation result. We illustrate the effectiveness of the method by presenting segmentation results for C2C12 (muscle) cells in bright-field images.
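The Bhattacharyya measure used above to compare the intensity profiles of cells and background can be computed from intensity histograms. A minimal sketch, where the bin count and the assumed [0, 1] intensity range are illustration choices:

```python
import numpy as np

def bhattacharyya(p, q, bins=32, rng=(0.0, 1.0)):
    """Bhattacharyya coefficient between the intensity distributions of
    two pixel sets (e.g. a candidate cell region and the background).
    Returns 1.0 for identical distributions and values near 0 for
    well-separated ones, so low values indicate good contrast between
    the two regions."""
    hp, _ = np.histogram(p, bins=bins, range=rng)
    hq, _ = np.histogram(q, bins=bins, range=rng)
    hp = hp / hp.sum()
    hq = hq / hq.sum()
    return float(np.sum(np.sqrt(hp * hq)))
```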
Segmentation of fluorescent cell images is a popular technique for tracking live cells. One challenge of fluorescence microscopy is that cells frequently disappear from view; when the images are stacked together to form a 3D image volume, this disappearance leads to broken cell paths. In this paper, we present a segmentation method that can reconstruct incomplete cell paths. The key idea of the model is to perform 2D segmentation in a 3D framework: the 2D segmentation captures the cells that appear in the image slices, while the 3D segmentation connects the broken cell paths. The formulation is similar to the Chan-Vese level set segmentation, which detects edges by comparing the intensity value at each voxel with the mean intensity values inside and outside of the level set surface. Our model, however, performs the comparison on each 2D slice, with the means calculated from the 2D projected contour. The resulting effect is to segment the cells on each image slice. Unlike segmentation performed on each image frame individually, these 2D contours together form the 3D level set function. By enforcing minimum mean curvature on the level set surface, our segmentation model is able to extend the cell contours right before (and after) the cell disappears (and reappears) into the gaps, eventually connecting the broken paths. We present segmentation results of C2C12 cells in fluorescent images and illustrate the effectiveness of our model qualitatively and quantitatively through several numerical examples.
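The key modification to the Chan-Vese data term, comparing each voxel against inside/outside means computed per 2D slice rather than global 3D means, can be sketched as follows. Representing the cell region by the sign of the level-set function `phi` is standard; the rest is a minimal illustration:

```python
import numpy as np

def slicewise_means(volume, phi):
    """For a 3D level-set function phi (cell where phi > 0), compute the
    Chan-Vese style inside/outside mean intensities separately for each
    2D slice, so that each voxel's intensity is compared against the
    means of its own slice rather than the global 3D means."""
    inside = phi > 0
    means_in, means_out = [], []
    for z in range(volume.shape[0]):
        sl, msk = volume[z], inside[z]
        means_in.append(sl[msk].mean() if msk.any() else 0.0)
        means_out.append(sl[~msk].mean() if (~msk).any() else 0.0)
    return np.array(means_in), np.array(means_out)
```

With the data term localized per slice, the remaining 3D coupling comes only from the curvature of the level-set surface, which is what bridges the slices where a cell has disappeared.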
Segmentation of brightfield cell images from microscopy is challenging in several ways. The contrast between cells and the background is low. Cells are usually surrounded by a "halo", an optical artifact common in brightfield images. Also, cell divisions occur frequently, which raises the issue of topological change during segmentation. In this paper, we present a robust segmentation method based on the watershed and level set methods. Instead of heuristically locating the initial markers for watershed, we apply a multiphase level set marker extraction to determine regions inside a cell. In contrast with standard level set segmentation, where only one level set function is used, we apply multiple level set functions (usually three) to capture the different intensity levels in a cell image. This is particularly important for distinguishing regions of similar but different intensity levels in low-contrast images. All the pixels obtained are used as initial markers for watershed. The region-growing process of watershed captures the rest of the cell until it hits the halo, which serves as a "wall" to stop the expansion. By using this relatively large number of points as markers together with watershed, we show that the low-contrast cell boundary can be captured correctly. Furthermore, we present a technique for watershed and level set to detect cell division automatically, with no human intervention. Finally, we present segmentation results of C2C12 cells in brightfield images to illustrate the effectiveness of our method.
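The role of the halo as a "wall" that stops watershed region growing can be demonstrated on a synthetic scene. This sketch uses SciPy's marker-based watershed in place of the paper's pipeline, and all intensity values and marker positions are illustration assumptions (in the paper, the interior markers come from the multiphase level set extraction):

```python
import numpy as np
from scipy import ndimage

# Synthetic brightfield-like scene: a dark cell interior surrounded by
# a bright halo ring on a mid-grey background.
img = np.full((40, 40), 120, dtype=np.uint8)
yy, xx = np.mgrid[:40, :40]
r = np.hypot(yy - 20, xx - 20)
img[r < 10] = 80                    # cell interior, darker than background
img[(r >= 10) & (r < 13)] = 255     # bright halo acting as a "wall"

# Markers: label 1 inside the cell, label 2 in the background.
markers = np.zeros_like(img, dtype=np.int16)
markers[20, 20] = 1
markers[2, 2] = 2

# Region growing floods outward from each marker through low grey
# values; the two regions meet only at the bright halo, so the cell
# interior is captured by marker 1 and the background by marker 2.
labels = ndimage.watershed_ift(img, markers)
```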