Automatic coronary centerline extraction and lumen segmentation facilitate the diagnosis of coronary artery
disease (CAD), which is a leading cause of death in developed countries. Various coronary centerline extraction
methods have been proposed, and most of them are based on shortest-path computation given one or two end
points on the artery. The major variation among the shortest-path based approaches lies in the vesselness
measurement used for the path cost. An empirically designed measurement (e.g., the widely used Hessian
vesselness) is by no means optimal in its use of image context information. In this paper, a machine learning
based vesselness is proposed by exploiting the rich domain specific knowledge embedded in an expert-annotated
dataset. For each voxel, we extract a set of geometric and image features. The probabilistic boosting tree
(PBT) is then used to train a classifier, which assigns a high score to voxels inside the artery and a low score
to those outside. The detection score can be treated as a vesselness measurement in the computation of the
shortest path. Since the detection score measures the probability that a voxel lies inside the vessel lumen, it
can also be used for coronary lumen segmentation. To speed up the computation, we perform classification
only for voxels around the heart surface, which is achieved by automatically segmenting the whole heart from
the 3D volume in a preprocessing step. An efficient voxel-wise classification strategy is used to further improve
the speed. Experiments demonstrate that the proposed learning based vesselness outperforms the conventional
Hessian vesselness in both speed and accuracy. On average, it takes only approximately 2.3 seconds to process
a large volume with a typical size of 512 × 512 × 200 voxels.
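As an illustration of the shortest-path step described above, the following Python sketch runs Dijkstra's algorithm on a voxel grid with a cost derived from a per-voxel vesselness score. The learned PBT detector is replaced by an arbitrary score volume, so this is only a minimal stand-in under those assumptions, not the authors' implementation.

```python
import heapq
import numpy as np

def shortest_path_centerline(vesselness, start, end, eps=1e-6):
    """Dijkstra shortest path on a 3D voxel grid.

    vesselness: 3D array of per-voxel scores in [0, 1] (stand-in for the
    learned detector output).  High scores are mapped to low traversal
    costs so the optimal path stays inside the lumen.
    start, end: (z, y, x) integer tuples on the artery.
    """
    cost = 1.0 / (vesselness + eps)              # high score -> cheap to traverse
    dist = np.full(vesselness.shape, np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]
    neighbors = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
                 for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
    while heap:
        d, v = heapq.heappop(heap)
        if v == end:
            break
        if d > dist[v]:
            continue
        for dz, dy, dx in neighbors:
            n = (v[0] + dz, v[1] + dy, v[2] + dx)
            if not all(0 <= c < s for c, s in zip(n, vesselness.shape)):
                continue
            nd = d + cost[n]
            if nd < dist[n]:
                dist[n] = nd
                prev[n] = v
                heapq.heappush(heap, (nd, n))
    path = [end]                                  # backtrack to recover the centerline
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```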
We recently proposed a robust heart chamber segmentation approach based on marginal space learning. In this paper, we focus on improving the LV endocardium segmentation accuracy by searching for an optimal smooth mesh that tightly encloses the whole blood pool. The refinement procedure is formulated as an optimization problem: maximizing the surface smoothness under the tightness constraint. The formulation is a convex quadratic programming problem and therefore has a unique global optimum and can be solved efficiently. Our approach has been validated on the largest cardiac CT dataset (457 volumes from 186 patients) ever reported. Compared to our previous work, it reduces the mean point-to-mesh error from 1.13 mm to 0.84 mm (a 22% improvement). Additionally, the system has been extensively tested on a dataset with 2000+ volumes without any major failure.
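The refinement formulation lends itself to a compact sketch. The Python snippet below states a toy version of the convex QP with cvxpy: each mesh vertex moves by a scalar offset along its normal, smoothness is a quadratic Laplacian penalty, and the tightness constraint keeps every vertex enclosing the blood pool. The matrices, bounds, and sizes are illustrative placeholders, not the paper's actual energy terms.

```python
import cvxpy as cp
import numpy as np

# Toy sketch of the refinement step as a convex QP: each mesh vertex is
# moved by a scalar offset x[i] along its normal.  Smoothness is a
# quadratic penalty on neighboring offsets (ring Laplacian L), and the
# tightness constraint keeps every vertex outside the detected blood
# pool (offset >= d[i], a hypothetical per-vertex distance).
n = 100
rng = np.random.default_rng(0)
d = rng.uniform(0.0, 2.0, n)                          # required outward offsets (mm)
L = np.eye(n) - 0.5 * np.roll(np.eye(n), 1, axis=1) \
              - 0.5 * np.roll(np.eye(n), -1, axis=1)  # Laplacian of a ring mesh

x = cp.Variable(n)
objective = cp.Minimize(cp.sum_squares(L @ x))        # maximize surface smoothness
constraints = [x >= d, x <= d + 5.0]                  # tightness: stay near blood pool
prob = cp.Problem(objective, constraints)
prob.solve()                                          # unique global optimum (convex QP)
print("refined offsets (first 5):", np.round(x.value[:5], 2))
```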
In CT angiography images, osseous structures occluding vessels pose difficulties for physicians during diagnosis.
Simple thresholding techniques for removing bones fail due to overlapping CT values of vessels filled with contrast
agent and osseous tissue, while manual delineation is slow and tedious. Thus, we propose to automatically
segment bones using a trainable classifier to label image patches as bone or background. The image features
provided to the classifier are based on grey value statistics and gradients. In contrast to most existing methods,
osseous tissue segmentation in our algorithm works without any prior knowledge of the body region depicted in
the image. This is achieved by using a probabilistic boosting tree, which is capable of automatically decomposing
the input space. The whole system works by partitioning the image using a watershed transform, classifying
image regions as bone or background and refining the result by means of a graph-based procedure. Additionally,
an intuitive way of manually refining the segmentation result is incorporated. The system was evaluated on 15
CTA datasets acquired from various body regions, showing an average correct recognition of bone regions of 80%
at a false positive rate of 0.025% of the background voxels.
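A rough Python sketch of this watershed-plus-classification pipeline is given below, with scikit-image's watershed and a gradient-boosting classifier standing in for the probabilistic boosting tree; the per-region grey-value and gradient features are simplified, and the graph-based refinement and manual editing steps are omitted.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed
from sklearn.ensemble import GradientBoostingClassifier  # stand-in for the PBT

def segment_bone(volume, clf):
    """Partition a CTA volume with a watershed transform and classify each
    region as bone or background from simple grey-value/gradient statistics.

    `clf` is assumed to be pre-trained on annotated regions (label 1 = bone).
    """
    gradient = sobel(volume.astype(np.float32))
    markers = ndi.label(gradient < 0.2 * gradient.mean())[0]  # flat regions as seeds
    regions = watershed(gradient, markers)

    bone_mask = np.zeros(volume.shape, dtype=bool)
    for rid in np.unique(regions):
        voxels = volume[regions == rid]
        grads = gradient[regions == rid]
        feats = [[voxels.mean(), voxels.std(), voxels.max(),
                  grads.mean(), grads.std()]]                 # per-region features
        if clf.predict(feats)[0] == 1:
            bone_mask[regions == rid] = True
    return bone_mask
```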
Assessment of computed tomography coronary angiograms for diagnostic purposes is a mostly manual, time-consuming
task demanding a high degree of clinical experience. In order to support diagnosis, a method for
reliable automatic detection of stenotic lesions in computed tomography angiograms is presented. Lesions are
detected by boosting-based classification: a strong classifier is trained with the AdaBoost algorithm on
annotated data, and the resulting classification function is then used to
detect different types of coronary lesions in previously unseen data. As pattern recognition algorithms require
a description of the objects to be classified, a novel approach for feature extraction in computed tomography
angiograms is introduced. By generation of cylinder segments that approximate the vessel shape at multiple
scales, feature values can be extracted that adequately describe the properties of stenotic lesions. As a result of
the multi-scale approach, the algorithm is capable of dealing with the variability of stenotic lesion configuration.
Evaluation of the algorithm was performed on a large database containing unseen segmented centerlines from
cardiac computed tomography images. Results showed that the method was able to detect stenotic cardiovascular
diseases with high sensitivity and specificity. Moreover, lesion-based evaluation revealed that the majority of
stenoses can be reliably identified in terms of position, type, and extent.
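To make the classification step concrete, the following Python sketch trains an AdaBoost classifier on synthetic feature vectors and scores unseen centerline points; the multi-scale cylinder features that form the core of the paper are replaced by random placeholders, so only the boosting-based detection stage is illustrated.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Synthetic stand-in for the cylinder-based features: each centerline point
# is described by mean/std/min intensity inside cylinder segments at three
# scales (9 values per point).  Real features would be sampled from the CTA
# volume along the segmented centerline.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 9))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 3] > 0.8).astype(int)  # toy lesion labels

clf = AdaBoostClassifier(n_estimators=200)       # strong classifier from weak stumps
clf.fit(X_train, y_train)

X_unseen = rng.normal(size=(50, 9))
lesion_score = clf.decision_function(X_unseen)   # high score -> likely stenotic lesion
candidates = np.where(lesion_score > 0)[0]       # indices of detected lesion positions
```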
Dual-Energy CT makes it possible to separate the contributions of different X-ray attenuation processes or materials in the
CT image. To this end, standard Dual-Energy tissue classification techniques perform a so-called material analysis or decomposition. The resulting material maps can then be used to perform explicit segmentation of anatomical structures, such as osseous tissue in the case of bone removal. As a drawback, the tissue classes included in the scan must be known beforehand in order to choose the appropriate material analysis algorithms. We propose direct volume
rendering with bidimensional transfer functions as a tool for interactive and intuitive exploration of Dual-Energy scans.
Here, an adequate visualization of the Dual-Energy histogram provides the basis for easily identifying different tissue classes. Transfer functions are interactively adjusted over the Dual-Energy histogram, where the x- and y-axes correspond to the 80 kV and 140 kV intensities, respectively. The GPU implementation allows precise fine-tuning of transfer functions with real-time feedback in the resulting visualization. Additionally, per-fragment filtering and post-interpolative Dual-Energy tissue classification are provided. Moreover, interactive histogram exploration makes it possible to create adequate Dual-Energy visualizations without pre-processing or previous knowledge about the existing tissue classes.
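A CPU-side sketch of the bidimensional transfer-function lookup is shown below in Python: the 80 kV and 140 kV intensities index an (N, N, 4) RGBA table laid out over the Dual-Energy histogram, mirroring what the GPU fragment stage would do per sample. The array names and intensity range are assumptions made for illustration.

```python
import numpy as np

def apply_2d_transfer_function(vol_80kv, vol_140kv, tf_table,
                               i_min=-1024.0, i_max=3071.0):
    """Map each voxel's (80 kV, 140 kV) intensity pair to RGBA via a
    bidimensional transfer function table (x-axis: 80 kV, y-axis: 140 kV)."""
    n = tf_table.shape[0]
    x = np.clip((vol_80kv - i_min) / (i_max - i_min) * (n - 1), 0, n - 1).astype(int)
    y = np.clip((vol_140kv - i_min) / (i_max - i_min) * (n - 1), 0, n - 1).astype(int)
    return tf_table[y, x]                       # per-voxel RGBA

def dual_energy_histogram(vol_80kv, vol_140kv, bins=256):
    """2D histogram over which the transfer function widgets are placed."""
    return np.histogram2d(vol_80kv.ravel(), vol_140kv.ravel(), bins=bins)[0]
```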
One of the most important applications of direct volume rendering is
the visualization of labeled medical data. Explicit segmentation of
embedded subvolumes allows a clear separation of neighboring
substructures in the same range of intensity values, which can then be
used for implicit segmentation of fine structures using transfer
functions. Nevertheless, the hard label boundaries of explicitly
segmented structures lead to voxelization artifacts. Pixel-resolution
linear filtering cannot solve this problem effectively. In order to
render soft label boundaries for explicitly segmented objects, we have
successfully applied a smoothing algorithm based on gradients of the
volumetric label data as a preprocessing step. A 3D-texture based
rendering approach was implemented, where volume labels are
interpolated independently of each other using the graphics
hardware. Thereby, correct trilinear interpolation of four subvolumes
is obtained. Per-label post-interpolative transfer functions together
with inter-label interpolation are performed in the pixel shader stage
in a single rendering pass, hence obtaining high-quality rendering of
labeled data on GPUs. The presented technique showed its high
practical value for the 3D-visualization of tiny vessel and nerve
structures in MR data in the case of neurovascular compression syndromes.
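The Python snippet below sketches the idea: hard labels are converted into per-label soft indicator volumes (a Gaussian filter stands in for the paper's gradient-based smoothing), and per-label transfer functions are applied after interpolation and blended by the soft label weights. It is a CPU illustration of the principle under these assumptions, not the GPU shader implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def soften_labels(label_volume, num_labels=4, sigma=1.0):
    """Turn a hard label volume into per-label soft indicator volumes so that
    each of the (up to four) subvolumes can be interpolated independently,
    avoiding voxelization artifacts at label boundaries."""
    soft = np.stack([(label_volume == l).astype(np.float32) for l in range(num_labels)])
    soft = np.stack([gaussian_filter(s, sigma) for s in soft])
    soft /= soft.sum(axis=0, keepdims=True) + 1e-8   # keep a partition of unity
    return soft                                       # shape (num_labels, z, y, x)

def blend_per_label_tfs(soft_labels, intensities, tfs):
    """Post-interpolative classification: apply each label's transfer function
    (callable intensity -> RGBA) and mix the results by the soft label weights."""
    rgba = np.zeros(intensities.shape + (4,), np.float32)
    for l, tf in enumerate(tfs):
        rgba += soft_labels[l][..., None] * tf(intensities)
    return rgba
```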
Direct volume visualization of computed tomography data is based on the mapping of data values to colors and opacities with lookup tables known as transfer functions (TFs). Often, limitations of one-dimensional TFs become evident when it comes to the visualization of aneurysms close to the skull base. Computed tomography angiography data is used for the 3D representation of the vessels filled with contrast medium. The reduced intensity differences between osseous tissue and contrast medium lead to strong artifacts and ambiguous visualizations. We introduced the use of bidimensional TFs based on measured intensities and gradient magnitudes for the visualization of aneurysms involving the skull base. The obtained results are clearly superior to a standard approach with one-dimensional TFs. Nevertheless, the additional degree of freedom increases the difficulty involved in creating adequate TFs. In order to address this problem, we introduce automatic adjustment of bidimensional TFs through registration of the respective 2D histograms. Initially, a dataset is set as reference and the information contained in its 2D histogram (intensities and gradient magnitudes) is used to create a TF template which produces a clear visualization of the vessels. When a new dataset is examined, elastic registration of the reference and target 2D histograms is carried out. The resulting free-form deformation is then used for the automatic adjustment of the reference TF, in order to automatically obtain a clear volume visualization of the vascular structures within the examined dataset. Results are comparable to manually created TFs. This approach makes it possible to successfully use bidimensional TFs without technical insight and training.
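As a sketch of the final adjustment step, the Python snippet below warps a reference 2D transfer-function table with a deformation field obtained from the histogram registration (the elastic registration itself is not shown); the table layout and argument names are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_transfer_function(tf_template, deformation):
    """Adjust a reference bidimensional TF to a new dataset.

    tf_template: (N, N, 4) RGBA table over the reference histogram
                 (intensity vs. gradient magnitude).
    deformation: (2, N, N) array giving, for every (row, col) bin of the
                 target histogram, the corresponding position in the
                 reference histogram (output of the elastic registration).
    """
    n = tf_template.shape[0]
    coords = deformation.reshape(2, n * n)            # sampling positions
    warped = np.stack([
        map_coordinates(tf_template[..., c], coords, order=1).reshape(n, n)
        for c in range(4)], axis=-1)
    return warped                                      # TF adjusted to the new dataset
```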
Neurovascular compression syndromes, caused by contact between vascular structures and the root entry or exit zone of cranial nerves, are associated with different neurological diseases (trigeminal neuralgia, hemifacial spasm, vertigo, glossopharyngeal neuralgia) and show a relation with essential arterial hypertension. As presented previously, the semi-automatic segmentation and 3D visualization of strongly T2-weighted MR volumes has proven to be an effective strategy for a better spatial understanding prior to operative microvascular decompression. After explicit segmentation of coarse structures, the tiny target nerves and vessels contained in the area of cerebrospinal fluid are segmented implicitly using direct volume rendering. However, with this strategy the delineation of vessels in the vicinity of the brainstem and of those at the border of the segmented CSF subvolume is critical. Therefore, we suggest registration with MR angiography and introduce consecutive fusion after semi-automatic labeling of the vascular information. Additionally, we present an approach for automatic 3D visualization and video generation based on predefined flight paths. Thereby, a standardized evaluation of the fused image data is supported and the visualization results are optimally prepared for intraoperative application. Overall, our new strategy contributes to a significantly improved 3D representation and evaluation of vascular compression syndromes. Its value for diagnosis and surgery is demonstrated with various clinical examples.