This paper addresses the localization of anatomical structures in medical images using a Generalized Hough Transform (GHT). As localization is often a prerequisite for subsequent model-based segmentation, it is important to assess whether or not the GHT was able to locate the desired object. The GHT, by construction, does not provide this assessment. We present an approach to detect incorrect GHT localizations by deriving collective features of the contributing GHT model points and training a Support Vector Machine (SVM) classifier. On a training set of 204 cases, we demonstrate that classification errors as low as 3% are achievable for the detection of incorrect localizations. This is one third of the observed intrinsic GHT localization error.
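The failure-detection scheme above can be sketched in a few lines. Everything below is illustrative: the three collective features (fraction of model points voting, spatial vote spread, peak-to-mean ratio) and the synthetic data are assumptions standing in for the paper's actual feature set and 204 training cases.

```python
# Minimal sketch: classify GHT localizations as correct/incorrect from
# collective features of the contributing model points, using an SVM.
# Feature definitions and data are synthetic placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Each row: [fraction of model points voting, spread of votes, peak/mean ratio].
X_correct = rng.normal([0.8, 2.0, 6.0], [0.1, 0.5, 1.0], size=(150, 3))
X_wrong = rng.normal([0.4, 5.0, 2.0], [0.1, 1.0, 0.5], size=(54, 3))
X = np.vstack([X_correct, X_wrong])
y = np.array([1] * 150 + [0] * 54)  # 1 = correct localization

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

With well-separated feature distributions, as here, the classifier flags incorrect localizations reliably; the paper's contribution lies in choosing features that achieve this separation on real GHT output.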
Automatic segmentation is a prerequisite to efficiently analyze the large amount of image data produced by modern imaging
modalities. Many algorithms exist to segment individual organs or organ systems. However, new clinical applications and
the progress in imaging technology will require the segmentation of more and more complex organ systems composed of a
number of substructures, e.g., the heart, the trachea, and the esophagus. The goal of this work is to demonstrate that such
complex organ systems can be successfully segmented by integrating the individual organs into a general model-based
segmentation framework, without tailoring the core adaptation engine to the individual organs. As an example, we address
the fully automatic segmentation of the trachea (around its main bifurcation, including the proximal part of the two main
bronchi) and the esophagus in addition to the heart with all chambers and attached major vessels. To this end, we integrate
the trachea and the esophagus into a model-based cardiac segmentation framework. Specifically, in a first parametric
adaptation step of the segmentation workflow, the trachea and the esophagus share global model transformations with
adjacent heart structures. This allows us to obtain a robust, approximate segmentation for the trachea even if it is only partly
inside the field-of-view, and for the esophagus in spite of limited contrast. The segmentation is then refined in a subsequent
deformable adaptation step. We obtained a mean segmentation error of about 0.6mm for the trachea and 2.3mm for the
esophagus on a database of 23 volumetric cardiovascular CT images. Furthermore, we show by quantitative evaluation
that our integrated framework outperforms individual esophagus segmentation, and individual trachea segmentation if the
trachea is only partly inside the field-of-view.
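The shared-transformation idea from the first parametric adaptation step can be illustrated with a minimal sketch: one global similarity transform, estimated from well-contrasted heart structures, is applied to all coupled meshes, so the poorly contrasted esophagus and the partly visible trachea are carried along. The meshes and transform values below are toy stand-ins, not the framework's actual model.

```python
# Sketch (illustrative values): trachea and esophagus share the global
# similarity transform of adjacent heart structures during parametric
# adaptation.
import numpy as np

def similarity_transform(points, scale, rotation, translation):
    """Apply p' = s * R @ p + t to an (N, 3) array of mesh vertices."""
    return scale * points @ rotation.T + translation

# Toy vertex arrays; in the real framework these are surface meshes.
heart = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
trachea = np.array([[0.0, 2.0, 0.0]])
esophagus = np.array([[0.0, 2.5, -0.5]])

# One transform, driven by heart image evidence (here: fixed values),
# is applied to every structure that shares it.
s, R, t = 1.1, np.eye(3), np.array([5.0, -2.0, 3.0])
for name, mesh in [("heart", heart), ("trachea", trachea), ("esophagus", esophagus)]:
    print(name, similarity_transform(mesh, s, R, t)[0])
```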
Automatic segmentation is a prerequisite to efficiently analyze the large amount of image data produced by modern imaging
modalities, e.g., computed tomography (CT), magnetic resonance (MR) and rotational X-ray volume imaging. While many
segmentation approaches exist, most of them are developed for a single, specific imaging modality and a single organ. In
clinical practice, however, it is becoming increasingly important to handle multiple modalities: First due to a case-specific
choice of the most suitable imaging modality (e.g. CT versus MR), and second in order to integrate complementary data
from multiple modalities. In this paper, we present a single, integrated segmentation framework which can easily be
adapted to a range of imaging modalities and organs. Our algorithm is based on shape-constrained deformable models. Key
elements are (1) a shape model representing the geometry and variability of the target organ of interest, (2) spatially varying
boundary detection functions representing the gray value appearance of the organ boundaries for the specific imaging
modality or protocol, and (3) a multi-stage segmentation approach. Focusing on fully automatic heart segmentation, we
present evaluation results for CT, MR (contrast-enhanced and non-contrasted), and rotational X-ray angiography (3-D RA).
We achieved a mean segmentation error of about 0.8mm for CT and (non-contrasted) MR, 1.0mm for contrast-enhanced
MR and 1.3mm for 3-D RA, demonstrating the success of our segmentation framework across modalities.
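Key element (2) can be pictured as a lookup of region- and modality-specific boundary detectors, while the shape model and adaptation engine stay unchanged. The detector functions, region names, and gray values below are illustrative assumptions, not the framework's actual detectors.

```python
# Sketch: boundary detection functions are stored per mesh region and
# swapped per imaging modality or protocol. Functions are toy examples
# that locate the strongest gray-value step along a sampled profile.
def bright_to_dark(profile):   # e.g. contrasted blood pool -> myocardium
    return max(range(1, len(profile)), key=lambda i: profile[i - 1] - profile[i])

def dark_to_bright(profile):   # opposite polarity for another protocol
    return max(range(1, len(profile)), key=lambda i: profile[i] - profile[i - 1])

# One detector per (modality, mesh region); retraining fills this table
# for each new modality without touching the adaptation engine.
detectors = {
    ("CT", "lv_endocardium"): bright_to_dark,
    ("MR", "lv_endocardium"): dark_to_bright,
}

profile = [100, 90, 20, 10]    # gray values sampled along a mesh normal
idx = detectors[("CT", "lv_endocardium")](profile)
print("boundary at sample", idx)
```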
Segmentation of organs in medical images can be successfully performed with shape-constrained deformable
models. A surface mesh is attracted to detected image boundaries by an external energy, while an internal
energy keeps the mesh similar to expected shapes. Complex organs like the heart with its four chambers can be
automatically segmented using a suitable shape variability model based on piecewise affine degrees of freedom.
In this paper, we extend the approach to also segment highly variable vascular structures. We introduce a
dedicated framework to adapt an extended mesh model to freely bending vessels. This is achieved by subdividing
each vessel into (short) tube-shaped segments ("tubelets"), each of which is assigned an individual similarity transformation
for local orientation and scaling. Proper adaptation is achieved by progressively adapting distal vessel
parts to the image only after proximal neighbor tubelets have already converged. In addition, each newly activated
tubelet inherits the local orientation and scale of the preceding one. To arrive at a joint segmentation of
chambers and vasculature, we extended a previous model comprising endocardial surfaces of the four chambers,
the left ventricular epicardium, and a pulmonary artery trunk. Newly added are the aorta (ascending and descending
plus arch), superior and inferior vena cava, coronary sinus, and four pulmonary veins. These vessels are
organized as stacks of triangulated rings, a mesh configuration well suited to defining tubelet segments.
On 36 CT data sets reconstructed at several cardiac phases from 17 patients, segmentation accuracies of
0.61-0.80mm are obtained for the cardiac chambers. For the visible parts of the newly added great vessels,
surface accuracies of 0.47-1.17mm are obtained (larger errors are associated with faintly contrasted venous
structures).
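The progressive tubelet activation can be sketched as follows. The 1-D "orientation", the image-driven update, and the convergence handling are illustrative placeholders for the actual similarity transforms and optimization.

```python
# Sketch: each tubelet carries its own similarity transform; a tubelet is
# adapted only after its proximal neighbour has converged, and it starts
# from that neighbour's orientation and scale.
from dataclasses import dataclass

@dataclass
class Tubelet:
    scale: float
    orientation: float  # toy 1-D stand-in for a 3-D rotation
    converged: bool = False

def adapt_to_image(t: Tubelet) -> None:
    # Placeholder for image-driven optimisation of the tubelet transform.
    t.scale *= 1.02       # pretend image evidence enlarges it slightly
    t.orientation += 0.1  # ... and bends it a little
    t.converged = True

def adapt_vessel(tubelets: list[Tubelet]) -> None:
    for i, t in enumerate(tubelets):
        if i > 0:
            # Newly activated tubelet inherits pose of the preceding one.
            t.scale = tubelets[i - 1].scale
            t.orientation = tubelets[i - 1].orientation
        adapt_to_image(t)  # distal parts adapt only after proximal ones

vessel = [Tubelet(scale=1.0, orientation=0.0) for _ in range(4)]
adapt_vessel(vessel)
print([round(t.orientation, 2) for t in vessel])
```

The inheritance step is what lets the model follow freely bending vessels: each segment's search starts from a pose that is already roughly correct.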
Segmentation of organs in medical images can be successfully performed with deformable models. Most approaches
combine a boundary detection step with some smoothness or shape constraint. An objective function
for the model deformation is thus established from two terms: the first one attracts the surface model to the
detected boundaries while the second one keeps the surface smooth or close to expected shapes.
In this work, we assign locally varying boundary detection functions to all parts of the surface model. These
functions combine an edge detector with local image analysis in order to accept or reject possible edge candidates.
The goal is to optimize the discrimination between the wanted and misleading boundaries. We present a method
to automatically learn from a representative set of 3D training images which features are optimal at each position
of the surface model. The basic idea is to simulate the boundary detection for the given 3D images and to select
those features that minimize the distance between the detected position and the desired object boundary.
The approach is experimentally evaluated for the complex task of full-heart segmentation in CT images. A
cyclic cross-evaluation on 25 cardiac CT images shows that the optimized feature training and selection enables
robust, fully automatic heart segmentation with a mean error well below 1 mm. Comparing this approach to
simpler training schemes that use the same basic formalism to accept or reject edges shows the importance of
the discriminative optimization.
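The simulated-search idea can be condensed into a toy experiment: for every candidate feature, boundary detection is simulated on training data, and the feature whose detected positions lie closest to the ground-truth boundary is selected. The candidate features, their noise levels, and the 1-D profile geometry are synthetic assumptions.

```python
# Sketch: select, per surface position, the boundary feature that
# minimizes the simulated detection error on training images.
import numpy as np

rng = np.random.default_rng(1)
true_boundary = 10.0  # ground-truth boundary position along a search profile

def detect(feature_noise: float, n_cases: int = 25) -> np.ndarray:
    """Simulate where a feature of given reliability places the boundary."""
    return true_boundary + rng.normal(0.0, feature_noise, size=n_cases)

# Candidate features with (unknown to the algorithm) reliabilities.
candidates = {"plain_gradient": 3.0,
              "gradient_plus_gray_value": 0.5,
              "laplacian": 2.0}

# Pick the feature minimising mean distance to the desired boundary.
errors = {name: np.abs(detect(noise) - true_boundary).mean()
          for name, noise in candidates.items()}
best = min(errors, key=errors.get)
print("selected feature:", best)
```

In the real method the "noise" is not known: it emerges from running each feature's detector on annotated 3D images, which is exactly what the simulation measures.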
Deformable models have already been successfully applied to the semi-automatic segmentation of organs from
medical images. We present an approach which enables the fully automatic segmentation of the heart from multi-slice
computed tomography images. Compared to other approaches, we address the complete segmentation chain
comprising both model initialization and adaptation.
A multi-compartment mesh describing both atria, both ventricles, the myocardium around the left ventricle
and the trunks of the great vessels is adapted to an image volume. The adaptation is performed in a coarse-to-
fine manner by progressively relaxing constraints on the degrees of freedom of the allowed deformations. First,
the mesh is translated to a rough estimate of the heart's center of mass. Then, the mesh is deformed under the
action of image forces. We first constrain the space of deformations to parametric transformations, compensating
for global misalignment of the model chambers. Finally, a deformable adaptation is performed to account for
more local and subtle variations of the patient's anatomy.
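The coarse-to-fine chain described above can be sketched as a staged pipeline in which each stage optimizes the mesh under a progressively less constrained transformation class. The stage bodies below (fixed localization result, global scaling, uniform offset) are placeholders for the actual image-driven optimizations.

```python
# Sketch of the adaptation chain: localization, parametric adaptation,
# then deformable refinement, with constraints relaxed at each stage.
import numpy as np

def localize(image):                 # stage 1: rough centre-of-mass estimate
    return np.array([60.0, 55.0, 40.0])

def parametric_adapt(mesh, image):   # stage 2: global/piecewise transforms
    return mesh * 1.05               # e.g. a global scaling fit to the image

def deformable_adapt(mesh, image):   # stage 3: free per-vertex refinement
    return mesh + 0.2                # small local displacements

mesh = np.zeros((3, 3))              # toy vertex array
image = None                         # placeholder for the CT volume

mesh = mesh + localize(image)        # translate to the heart's centre
mesh = parametric_adapt(mesh, image)
mesh = deformable_adapt(mesh, image)
print(mesh[0])
```

The ordering matters: the cheap, heavily constrained stages bring the mesh close enough that the final deformable stage only has to explain local, subtle anatomical variation.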
The whole heart segmentation was quantitatively evaluated on 25 volume images and qualitatively validated
on 42 clinical cases. Our approach was found to work fully automatically in 90% of cases with a mean surface-
to-surface error clearly below 1.0 mm. Qualitatively, expert reviewers rated the overall segmentation quality as
4.2±0.7 on a 5-point scale.
An efficient way to improve the robustness of the segmentation of medical images with deformable models is to use a priori shape knowledge during the adaptation process. In this work, we investigate how the modeling of shape variability in shape-constrained deformable models influences both the robustness and the accuracy of the segmentation of cardiac multi-slice CT images. Experiments are performed for a complex heart model, which comprises 7 anatomical parts, namely the four chambers, the myocardium, and the trunks of the aorta and the pulmonary artery. In particular, we compare a common shape variability modeling technique based on principal component analysis (PCA) with a simpler approach, which consists of assigning an individual affine transformation to each anatomical subregion of the heart model. We conclude that the piecewise affine modeling leads to the smallest segmentation error, while simultaneously offering the largest flexibility without the need for training data covering the range of possible shape variability, as required by PCA.
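The contrast between the two variability models can be illustrated with a toy experiment: a PCA model can only reproduce shapes within the span of its training set, while an affine transform per part reaches, for example, a uniformly scaled shape exactly, with no training data. The shapes, noise model, and single-part affine fit below are synthetic simplifications.

```python
# Sketch: PCA shape model vs. per-part affine transform on a shape
# outside the training range (a 40% enlarged mean shape).
import numpy as np

rng = np.random.default_rng(2)

# Training shapes: flattened vertex coordinates of aligned toy meshes.
mean_shape = np.array([0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0])
training = mean_shape + rng.normal(0.0, 0.05, size=(20, 8))

# PCA model: shapes restricted to mean + span of the leading eigenmodes.
centered = training - training.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
modes = vt[:3]                         # keep 3 modes

def pca_project(shape):
    d = shape - training.mean(axis=0)
    return training.mean(axis=0) + modes.T @ (modes @ d)

# Affine model (one part, pure scaling, for brevity): fit the best scale.
def affine_fit(shape, target):
    s = float(np.dot(shape, target) / np.dot(shape, shape))
    return s * shape

target = mean_shape * 1.4              # a shape outside the training range
pca_err = np.abs(pca_project(target) - target).max()
aff_err = np.abs(affine_fit(mean_shape, target) - target).max()
print(f"PCA error {pca_err:.3f}, affine error {aff_err:.3f}")
```

This mirrors the paper's conclusion: the affine parameterization does not depend on the training set having covered the variation in question.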
An automatic procedure for detecting and segmenting anatomical objects in 3-D images is necessary for achieving a high level of automation in many medical applications. Since today's segmentation techniques typically rely on user input for initialization, they do not allow for a fully automatic workflow. In this work, the generalized Hough transform is used for detecting anatomical objects with well-defined shape in 3-D medical images. This well-known technique has frequently been used for object detection in 2-D images and is known to be robust and reliable. However, its computational and memory requirements are generally huge, especially for 3-D images with several free transformation parameters. Our approach limits the complexity of the generalized Hough transform to a reasonable amount by (1) using object prior knowledge during the preprocessing in order to suppress unlikely regions in the image, (2) restricting the flexibility of the applied transformation to only scaling and translation, and (3) using a simple shape model which does not cover any inter-individual shape variability. Despite these limitations, the approach is demonstrated to allow for a coarse 3-D delineation of the femur, vertebra and heart in a number of experiments. Additionally, it is shown that the quality of the object localization is in nearly all cases sufficient to initialize a successful segmentation using shape-constrained deformable models.
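The restricted transform space of point (2) keeps the Hough accumulator small: each edge point votes, via the model's R-table offsets, for candidate object centres at a few scales only. The 1-D geometry, R-table, and edge points below are toy assumptions chosen to keep the sketch short; the real accumulator is 4-D (3-D translation plus scale).

```python
# Sketch: Hough-style voting with translation and scaling only.
# The accumulator maximum gives the coarse localization.
r_table = [-2.0, 2.0]                     # toy model: boundary at centre ± 2

edge_points = [8.0, 12.0, 30.0]           # detected edges (30.0 is clutter)
scales = [0.5, 1.0, 1.5]
accumulator = {}                          # (position_bin, scale) -> votes

for e in edge_points:
    for s in scales:
        for r in r_table:
            c = e + s * r                 # candidate centre for this scale
            key = (round(c), s)
            accumulator[key] = accumulator.get(key, 0) + 1

best = max(accumulator, key=accumulator.get)
print("estimated centre and scale:", best)
```

Both true boundary points agree only on the cell (10, 1.0), while the clutter point scatters its votes, which is what makes the voting robust.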