Beam hardening is one of the most important causes of metal artifacts that degrade CT image quality. With a polychromatic X-ray beam, it occurs most noticeably when scanning metallic materials whose energy-dependent attenuation coefficients vary strongly. This violates the assumption, underlying CT reconstruction, of a fixed attenuation coefficient for a monochromatic X-ray, and leads to beam-hardening artifacts such as streaks and cupping. Numerous methods have been proposed to reduce beam-hardening artifacts, but most require optimization based on iterative reconstruction and are therefore time consuming. This study aims at a methodology that is efficient in terms of processing time while providing acceptable correction of beam-hardening artifacts. To this end, the attenuation coefficient error due to beam hardening is modeled as a function of the length of the X-ray path through the metallic material, and the model is approximated by a linear combination of four basis functions of that length. The linearity is preserved in the reconstructed image, so the coefficient of each basis function can be obtained by minimizing the variance of the homogeneous metal region in the image. For evaluation, a phantom containing three titanium rods was scanned by a cone-beam CT system (Ray, South Korea) and the images were reconstructed with the standard FDK algorithm. The results showed that the proposed method is superior in speed while delivering acceptable beam-hardening correction compared to recent methods. The proposed model should be effective for applications where processing speed matters in beam-hardening correction.
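The variance-minimization step can be sketched as follows. This is a toy illustration, not the paper's implementation: the four basis functions are assumed to be simple powers of the path length L, and the attenuation data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical polynomial basis functions of the metal path length L (cm);
# the paper's four basis functions are not specified here, so powers of L stand in.
def basis(L):
    return np.stack([L, L**2, L**3, L**4], axis=1)

L = rng.uniform(1.0, 5.0, 200)                       # simulated path lengths
mu_true = 1.0                                        # true attenuation of the metal
raw = mu_true - 0.05 * L + 0.002 * L**2 \
      + 0.001 * rng.standard_normal(L.size)          # simulated cupping error + noise

# Fit raw ~ mean + B @ c with a centered basis, then subtract B @ c:
# this is the least-squares solution of the variance-minimization problem
# over the homogeneous metal region.
B = basis(L)
Bc = B - B.mean(axis=0)                              # centering leaves the mean intact
c, *_ = np.linalg.lstsq(Bc, raw - raw.mean(), rcond=None)
corrected = raw - Bc @ c
```

Because the fit is a single linear least-squares solve rather than an iterative reconstruction, the correction cost is negligible compared to reconstruction itself.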
Dental segmentation plays an important role in prosthetic dentistry, including crowns, implants, and even orthodontics. Since dental structures vary from person to person, it is hard to build a general dental segmentation model, and only a few studies have tried to tackle this problem. In this paper, we propose simple and intuitive algorithms for a harmonic-field-based dental segmentation method that is robust on clinical dental mesh data. Our model adds grounds to the gum, a pair of different Dirichlet boundary conditions, and convex segmentation as post-processing. Our data were generated for clinical use and therefore contain considerable noise, holes, and crowns. Moreover, some meshes have abraded teeth, which degrade the performance of the harmonic field because of its abrupt gradient changes. To the best of our knowledge, the proposed method and experiments are the first to deal with real clinical data containing noise and fragmented areas. We evaluate the results qualitatively and quantitatively to demonstrate the performance of the model, which separates teeth from the gum and from neighboring teeth very accurately. We use intersection over union (IoU) to calculate the overlap ratio between teeth, and human evaluation to measure and compare the performance of our segmentation model against other models. We compare the segmentation results of a baseline model and our model, and an ablation study shows that our model improves segmentation performance. Our model outperforms the baseline model at the expense of some overlap, which can be ignored.
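As a rough illustration of the harmonic-field idea with Dirichlet boundary conditions, consider a toy 1-D chain of vertices standing in for a tooth mesh (the real method operates on a mesh Laplacian with grounds on the gum):

```python
import numpy as np

# Toy harmonic field on a 1-D chain of mesh vertices.
# Dirichlet boundary conditions: vertex 0 is grounded to the gum (0),
# the last vertex is fixed to the tooth seed (1); interior vertices are harmonic.
n = 6
A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0], b[0] = 1.0, 0.0          # gum ground
A[-1, -1], b[-1] = 1.0, 1.0       # tooth seed
for i in range(1, n - 1):         # discrete Laplace equation at interior vertices
    A[i, i - 1], A[i, i], A[i, i + 1] = -1.0, 2.0, -1.0
field = np.linalg.solve(A, b)
# An isovalue of the field (e.g. 0.5) then separates gum from tooth.
```

On a real mesh the matrix is the (cotangent-weighted) graph Laplacian and the field's isocurves give the cutting boundary between tooth and gum.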
In this paper, we propose an algorithm for reliable segmentation of the lung in HRCT of diffuse interstitial lung disease (DILD). Our method consists of four main steps. First, the airway and colon are segmented and excluded by thresholding (-974 HU) and connected component analysis. Second, the initial lung is identified by thresholding (-474 HU). Third, shape propagation outward from the lung is performed on the initial lung; the actual lung boundaries lie inside the propagated boundaries. Finally, a subsequent shape-modeling level set, evolving inward from the propagated boundary with a highly weighted curvature term, identifies the lung boundary. To assess the accuracy of the proposed algorithm, the segmentation results of 54 patients were compared with manual segmentations done by an expert radiologist. The error, measured as 1 minus the volumetric overlap, is less than 5%. The accurate results of our method should be useful for delineating the lung parenchyma in HRCT, an essential step in the automatic classification and quantification of diffuse interstitial lung disease.
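The first two steps (thresholding plus connected component analysis) can be sketched on a toy 2-D slice. The HU thresholds come from the text; the flood-fill labeling is a minimal stand-in for a real connected-component routine:

```python
import numpy as np
from collections import deque

# Toy 2-D slice in HU: -1000 ~ airway, -500 ~ lung parenchyma, 50 ~ soft tissue.
hu = np.full((6, 6), 50)
hu[1:5, 1:5] = -500           # lung parenchyma
hu[2:4, 2:4] = -1000          # airway inside the lung

airway = hu < -974            # step 1: airway by thresholding (-974 HU)
initial = hu < -474           # step 2: initial lung by thresholding (-474 HU)
lung = initial & ~airway      # exclude the airway component from the lung

def components(mask):
    """Count 4-connected components by flood fill (minimal stand-in)."""
    seen = np.zeros_like(mask, dtype=bool)
    count = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        count += 1
        queue = deque([(sy, sx)])
        seen[sy, sx] = True
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    queue.append((ny, nx))
    return count
```

In 3-D the same logic applies per volume with 6- or 26-connectivity, after which the largest components are kept as the lung candidates.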
Automatic liver segmentation remains a challenging task due to the ambiguity of the liver boundary and the complex context of nearby organs. In this paper, we propose a faster and more accurate method of liver segmentation in CT images with an enhanced level-set method. The speed image for level-set propagation is smoothed by increasing the number of iterations of anisotropic diffusion filtering. This prevents the level-set propagation from stopping at local minima, which are common in liver CT images due to the irregular intensity distribution of the liver interior. The curvature term of the shape-modeling level-set method captures the shape variations of the liver across slices well. Finally, a rolling-ball algorithm is applied to include enhanced vessels near the liver boundary. Our approach was tested against manual segmentations of eight CT scans with 5 mm slice spacing, using the average distance and volume error. The average distance between corresponding liver boundaries is 1.58 mm and the average volume error is 2.2%. The average processing time per slice is 5.2 seconds, which is much faster than conventional methods. The accurate and fast results of our method will expedite the next stage of liver volume quantification for liver transplantation.
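The anisotropic diffusion filtering used to smooth the speed image can be sketched with a classic Perona-Malik scheme; this is a plausible stand-in, as the abstract does not specify the exact diffusion model, and all parameters are illustrative:

```python
import numpy as np

def perona_malik(img, iters=20, k=10.0, lam=0.2):
    """Edge-preserving smoothing; more iterations yield a smoother speed image."""
    u = img.astype(float).copy()
    for _ in range(iters):
        # differences to the four neighbours (periodic borders for brevity)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        g = lambda d: np.exp(-(d / k) ** 2)   # conduction shuts down at strong edges
        u = u + lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

More iterations flatten intensity irregularities inside the liver so that the level-set speed term does not vanish at spurious local minima, while the conduction function g keeps genuine boundaries sharp.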
This paper presents an efficient graphics-hardware-based method to segment and visualize level-set surfaces at interactive rates. Our method is composed of a page manager, a level-set solver, and a volume renderer. The page manager, which runs on the CPU, generates the page table, inverse page table, and available page stack, and handles the activation and inactivation of pages. The level-set solver computes only voxels near the iso-surface. To run efficiently on GPUs, the volume is decomposed into a set of small pages, and only pages with non-zero derivatives are stored on the GPU. These active pages are packed into a large 2D texture, and the level-set partial differential equation (PDE) is computed directly on this packed format, with the page manager managing the packing of the active data. The volume renderer renders the original data simultaneously with the evolving level set on the GPU. Experimental results on two chest CT datasets show that our graphics-hardware-based level-set method is much faster than a software-based one.
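The page-manager bookkeeping described above might look like the following sketch; the class and method names are illustrative, not the paper's API:

```python
# Sketch of the page-manager bookkeeping: pages with non-zero derivatives are
# "active" and get packed into slots of a large 2-D texture.
class PageManager:
    def __init__(self, n_slots):
        self.free = list(range(n_slots))    # available page stack
        self.page_table = {}                # 3-D page id -> 2-D texture slot
        self.inverse = {}                   # 2-D texture slot -> 3-D page id

    def activate(self, page_id):
        """Assign a texture slot to a page that gained non-zero derivatives."""
        if page_id in self.page_table:
            return self.page_table[page_id]
        slot = self.free.pop()              # take a slot from the stack
        self.page_table[page_id] = slot
        self.inverse[slot] = page_id
        return slot

    def deactivate(self, page_id):
        """Return the slot of a page whose derivatives vanished to the stack."""
        slot = self.page_table.pop(page_id)
        del self.inverse[slot]
        self.free.append(slot)
```

The inverse page table is what lets the GPU solver map a packed 2-D texel back to its 3-D voxel when evaluating neighbours across page borders.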
We propose a fast and robust registration method for matching lung nodules in temporal chest CT scans. Our method is composed of four stages. First, the lungs are extracted from the chest CT scans by an automatic segmentation method. Second, the gross translational mismatch is corrected by optimal cube registration; this initial registration does not require extracting any anatomical landmarks. Third, the initial alignment is refined step by step by iterative surface registration. To evaluate the distance measure between surface boundary points, a 3D distance map is generated by narrow-band distance propagation, which drives fast and robust convergence to the optimal location. Fourth, nodule correspondences are established as the pairs with the smallest Euclidean distances. The results of pulmonary nodule alignment for twenty patients are reported on a per-center-of-mass basis using the average Euclidean distance (AED) error between corresponding nodules of initial and follow-up scans. Our registration significantly reduces the average AED error of the twenty patients from 30.0 mm to 4.7 mm. Experimental results show that our registration method aligns lung nodules much faster than conventional distance-measure-based methods. The accurate and fast results of our method should aid the radiologist's evaluation of pulmonary nodules on chest CT scans.
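The fourth stage, pairing nodules by smallest Euclidean distance, reduces to a nearest-neighbor match; a minimal sketch on made-up nodule centers:

```python
import numpy as np

# Toy nodule centers of mass (mm) in the initial and the registered follow-up scan.
initial = np.array([[10.0, 20.0, 30.0], [60.0, 40.0, 25.0], [35.0, 70.0, 50.0]])
follow = np.array([[61.0, 41.0, 24.0], [11.0, 19.0, 31.0], [34.0, 71.0, 49.0]])

# Pair each initial nodule with its closest follow-up nodule (stage four),
# then report the average Euclidean distance (AED) error over the pairs.
d = np.linalg.norm(initial[:, None, :] - follow[None, :, :], axis=2)
match = d.argmin(axis=1)
aed = d[np.arange(len(initial)), match].mean()
```

The AED computed this way is exactly the per-center-of-mass error reported in the evaluation; it shrinks as the preceding registration stages improve the alignment.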
To investigate changes of pulmonary nodules in temporal chest CT scans, we propose a novel technique for segmentation and registration of the lungs. Our method is composed of the following steps. First, automatic segmentation identifies the lungs in the chest CT scans. Second, optimal cube registration corrects the gross translational mismatch of the lungs; this initial registration does not require any anatomical landmarks. Third, a 3D distance map is generated by narrow-band distance propagation, which drives fast and robust convergence to the optimum. Fourth, the distance between surface boundary points is evaluated repeatedly with the selective distance measure (SDM), and the final geometric transformations are applied to ten pairs of successive chest CT scans. Fifth, nodule correspondences are established as the pairs with the smallest Euclidean distances. The performance of our method was evaluated in terms of visual inspection and accuracy. The positional differences between the lungs of initial and follow-up CT scans were greatly reduced by the optimal cube registration, and this initial alignment was then refined by the subsequent iterative surface registration. For accuracy assessment, we evaluated the root-mean-square (RMS) error between corresponding nodules on a per-center basis; the RMS error decreased with the optimal cube registration, the subsequent iterative surface registration, and the nodule registration. Experimental results show that our segmentation and registration method extracts the lungs accurately and aligns them much faster than conventional distance-measure-based methods. The accurate and fast results of our method should aid the radiologist's evaluation of pulmonary nodules on chest CT scans.
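Narrow-band distance propagation can be sketched in 2-D with a breadth-first, chamfer-style pass; the city-block metric and band width here are illustrative simplifications of the real 3-D map:

```python
import numpy as np
from collections import deque

def distance_map(surface, band=3):
    """Propagate city-block distances outward from a surface mask,
    but only within a narrow band of the given width."""
    d = np.full(surface.shape, np.inf)
    queue = deque()
    for p in zip(*np.nonzero(surface)):
        d[p] = 0
        queue.append(p)
    while queue:
        y, x = queue.popleft()
        if d[y, x] >= band:              # stop propagating outside the band
            continue
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < d.shape[0] and 0 <= nx < d.shape[1]
                    and d[ny, nx] > d[y, x] + 1):
                d[ny, nx] = d[y, x] + 1
                queue.append((ny, nx))
    return d
```

Restricting propagation to the narrow band keeps the map cheap to build while still giving the surface registration a smooth distance gradient to converge along.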
Perfusion CT has been successfully used as a functional imaging technique for the diagnosis of patients with hyperacute stroke. However, the commonly used curve-fitting-based methods are time consuming. Numerous researchers have investigated to what extent perfusion CT can be used for the quantitative assessment of cerebral ischemia and for rapidly obtaining comprehensive information on the extent of ischemic damage in acute stroke patients. The aim of this study is to propose an alternative approach that rapidly produces a brain perfusion map, and to show that the proposed cerebral flow imaging of vessels and tissue in the human brain is reliable and useful. Our main design concerns were algorithmic speed, robustness, and automation, to allow its potential use in the emergency situation of acute stroke. To obtain a more effective mapping, we analyzed the signal characteristics of perfusion CT and defined a vessel-around model that includes both the vessel and the surrounding tissue. We propose a nonparametric vessel-around approach that automatically discriminates the vessel and peri-vessel tissue from non-interesting brain matter by stratifying the maximum enhancement level of each pixel-based time-attenuation curve (TAC). The stratification of pixel-based TACs was performed using the mean and standard deviation of the signal intensity of each pixel and mapped to the cerebral flow image. The vessel-around model was used to display the cerebral flow image and to delineate areas of markedly reduced perfusion with loss of function but still-viable neurons. Perfusion CT is a fast and practical technique for routine clinical application, and it provides substantial additional information for selecting the optimal treatment strategy for patients with hyperacute stroke. The vessel-around approach reduces the computation time significantly compared with perfusion imaging using the GVF. The proposed cerebral imaging shows reliable results, validated by physicians and medical staff, who also found the vessel-around approach comprehensive and easy to interpret; we therefore conclude that the proposed vessel-around technique can be used for brain perfusion mapping.
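The stratification of pixel-based TACs by maximum enhancement can be sketched as follows; the bolus shape and the enhancement thresholds are illustrative, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy pixel-based time-attenuation curves (TACs): rows = pixels, columns = time.
t = np.arange(20)
bolus = np.exp(-0.5 * ((t - 8) / 2.0) ** 2)          # idealized contrast bolus
tacs = np.vstack([
    40 + 200 * bolus,    # vessel pixel: strong enhancement
    40 + 60 * bolus,     # tissue around the vessel: moderate enhancement
    40 + 0 * bolus,      # non-interesting brain matter
]) + rng.normal(0.0, 1.0, (3, 20))

# Stratify by maximum enhancement over each pixel's pre-bolus baseline;
# the baseline mean and standard deviation characterize the pixel's signal,
# and the 150/30 HU cut-offs are purely illustrative.
base_mean = tacs[:, :3].mean(axis=1)
base_std = tacs[:, :3].std(axis=1)                   # per-pixel noise level
peak = tacs.max(axis=1) - base_mean
labels = ["vessel" if p > 150 else "around" if p > 30 else "other" for p in peak]
```

Because this needs only a maximum and two moments per pixel rather than a curve fit, the whole map is computed in a single pass over the time series.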
To detect cerebral aneurysms, arterial stenoses, and other vascular anomalies in brain CT angiography, we propose a novel technique of cerebral vessel visualization with patient motion correction. Our method has the following steps. First, a set of feature points within the skull base is selected using a 3D edge detection technique. Second, a locally weighted 3D distance map is constructed to lead our similarity measure to robust convergence on its maximum. Third, the similarity measure between feature points is evaluated repeatedly by selective cross-correlation (SCC). Fourth, 3D bone-vessel masking and subtraction is performed to remove bones completely. Our method has been successfully applied to datasets of five different patients with intracranial aneurysms obtained from a 16-slice multi-detector row CT scanner; the total processing time for each dataset was less than 20 seconds. The performance of our method was evaluated in terms of accuracy and robustness. For accuracy assessment, we present two-dimensional and three-dimensional visual comparisons of a conventional method and the proposed method. While the quality of the conventional method was substantially degraded by patient motion artifacts, our method preserved the quality of the original image; in particular, intracranial aneurysms were well visualized. Experimental results show that our method is clinically promising, as it is barely influenced by the image degradation that occurs at the bone-vessel interface. In all experimental datasets, intracranial aneurysms as well as arteries are clearly visible in the volumetric images.
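The heart of the similarity evaluation, normalized cross-correlation, can be sketched in 1-D by recovering a simulated shift between mask and contrast data; the actual SCC operates on selected 3-D feature points rather than whole signals:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two intensity vectors."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(2)
mask = rng.standard_normal(64)          # pre-contrast feature intensities
contrast = np.roll(mask, 3)             # post-contrast data with a 3-voxel shift

# Evaluate the similarity measure over candidate shifts and keep the best one.
scores = {s: ncc(np.roll(contrast, -s), mask) for s in range(-5, 6)}
best = max(scores, key=scores.get)
```

The motion estimate is the shift that maximizes the correlation; evaluating it only at selected feature points is what keeps the total processing time under 20 seconds per dataset.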
Pre-integrated volume rendering, which produces high-quality images with less sampling, has become one of the most efficient and important techniques in the volume rendering field. In this paper, we propose an acceleration technique for pre-integrated rendering of dynamically classified volumes. Using overlapped min-max blocks, the empty-space skipping of ray casting can be applied to pre-integrated volume rendering. In addition, a new pre-integrated lookup table enables much faster rendering of high-precision data without degrading image quality. We have implemented our approach both on consumer graphics hardware and on the CPU, and show the performance gains on several medical datasets.
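A pre-integrated lookup table can be sketched for an opacity-only transfer function: each entry averages the transfer function between the front and back sample values of a ray segment. The table resolution and the ramp transfer function are illustrative, and prefix sums make each entry O(1):

```python
import numpy as np

# Opacity-only pre-integration: entry [sf, sb] holds the average transfer-function
# opacity between the front and back scalar values of a ray segment.
n = 64
tf_alpha = np.linspace(0.0, 1.0, n)                 # toy transfer function (a ramp)
cum = np.concatenate([[0.0], np.cumsum(tf_alpha)])  # prefix sums: O(1) per entry

sf, sb = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
lo, hi = np.minimum(sf, sb), np.maximum(sf, sb)
table = (cum[hi + 1] - cum[lo]) / (hi - lo + 1)     # inclusive average over [lo, hi]
```

Because the table depends only on the transfer function, it can be rebuilt quickly when the classification changes, which is what makes pre-integration viable for dynamically classified volumes.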
In this paper, we propose a novel technique of multimodality volume fusion using graphics hardware. Our 3D-texture-based volume fusion algorithm consists of three steps. First, two volumes of different modalities are loaded into texture memory on the GPU. Second, textured slices of the two volumes along the same proxy geometry are combined with various compositing functions. Third, all the composited slices are alpha-blended. We have implemented our algorithm in HLSL (High Level Shader Language). Our method shows the exact depth of each volume and realistic views at interactive rates, in contrast to software-based image integration. Experimental results using MR and PET brain images and an angiography with a stent show that the over compositing operator is the most useful for clinical application.
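The compositing step can be illustrated with the standard "over" operator on associated (premultiplied) colors; the sample MR/PET values below are made up:

```python
import numpy as np

def over(front_rgb, front_a, back_rgb, back_a):
    """Standard 'over' operator on associated (premultiplied) colors."""
    out_a = front_a + back_a * (1.0 - front_a)
    out_rgb = front_rgb + back_rgb * (1.0 - front_a)
    return out_rgb, out_a

# Made-up sample values: a gray MR fragment composited over a colored PET fragment.
mr_rgb, mr_a = np.array([0.6, 0.6, 0.6]) * 0.4, 0.4
pet_rgb, pet_a = np.array([0.9, 0.2, 0.0]) * 0.8, 0.8
fused_rgb, fused_a = over(mr_rgb, mr_a, pet_rgb, pet_a)
```

Applying this operator per textured slice, back to front, is exactly the alpha-blending pass the GPU performs; other compositing functions simply replace the two formulas inside `over`.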
In this paper, we present a novel technique for improving vessel visualization quality by removing motion artifacts in digital subtraction brain CT angiography. The proposed method is based on three key ideas. First, a set of feature points is selected automatically using a 3D edge detection technique based on the image gradient of the mask and contrast volumes. Second, a locally weighted 3D distance map is generated to drive robust convergence on the optimum. Third, the similarity measure between the extracted feature points is evaluated repeatedly by selective cross-correlation. The proposed method has been successfully applied to pre- and post-contrast brain CT angiography datasets for global and spatial motion correction. The feature point selection, which restricts processing to areas of interest consisting only of voxels on object boundaries, is very fast compared to traditional algorithms that search the entire volume. Since the registration estimates similarity measures between feature points and the locally weighted 3D distance map drives robust convergence on the optimum, the method offers an accelerated technique for accurately visualizing the vessels of the brain.
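The first key idea, gradient-based feature point selection, can be sketched on a toy volume; the gradient threshold is illustrative:

```python
import numpy as np

# Toy volume with a bright "bone" block; feature points are voxels whose image
# gradient magnitude exceeds a threshold (the 200 threshold is illustrative).
vol = np.zeros((8, 8, 8))
vol[2:6, 2:6, 2:6] = 1000.0
g0, g1, g2 = np.gradient(vol)            # central differences along each axis
gmag = np.sqrt(g0**2 + g1**2 + g2**2)
features = np.argwhere(gmag > 200.0)     # voxels on the object boundary
```

Because only boundary voxels survive the threshold, the subsequent cross-correlation touches a tiny fraction of the volume, which is the source of the speedup over whole-volume search.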
Different imaging modalities give insight into vascular, anatomical, and functional information that assists diagnosis and treatment planning in medicine. Depending on the clinical requirement, it is often not sufficient to consider anatomical and functional information separately; images of different modalities must be superimposed. However, naive superimposition often produces unreliable results because functional modalities have low sampling resolution. In this paper, we present a novel technique for improving image fusion quality and speed by integrating voxel-based registration with consecutive visualization. In the first part, we discuss voxel-based registration using mutual information combined with a gradient measure, which takes spatial information in the images into account and thereby provides a much more general and reliable measure. In the second part, we propose a volume rendering technique for generating high-quality images rapidly without specialized hardware. Fusion of MR and PET brain images is presented for visual validation of the proposed methods. Our method offers a robust technique for fusing anatomical and functional modalities that allows direct functional-to-structural correlation analysis.
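The mutual information measure (without the gradient term) can be sketched from the joint intensity histogram of two aligned images:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information from the joint intensity histogram of aligned images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)     # marginal of modality a
    py = p.sum(axis=0, keepdims=True)     # marginal of modality b
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

The registration seeks the transformation of one volume that maximizes this quantity against the other; the gradient measure mentioned above would be multiplied in to weight spatially consistent edges.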
Visualizing the brain anatomy, seizure focus location, and grid and strip electrodes in 3-dimensional space provides improved planning information for focus localization and margin determination pre- and intra-operatively. However, given the relatively poor spatial resolution and structural detail of PET images, it can be difficult to determine the precise anatomic localization of the site of increased activation during seizure. In this paper, we present an intensity-based registration and combined visualization of CT, MR, and PET brain images that provides both critical functional information and structural detail.
Volume rendering is beneficial in many medical fields. For effective surgery planning, interior objects as well as the surface must be rendered. However, direct volume rendering is computationally expensive, and surface rendering cannot represent the interior of the volume data; in addition, the huge number of generated polygons cannot be easily managed even on high-end graphics workstations. This paper presents a way of generating multi-planar images efficiently. Multi-planar rendering consists of two parts: the surface and the cutting plane. To generate the surface efficiently, our algorithm uses image-based rendering, which generates an image in constant time regardless of the complexity of the input scene. To speed up the performance, our algorithm works in an intermediate image space instead of the final image space, and to reduce the space complexity we use a new data structure based on the delta-tree to represent the volume. The algorithm was implemented on a Silicon Graphics Indigo 2 workstation with a single 195 MHz R10000 processor and 192 MB of main memory. For the experiments, we used three volume datasets: UNC head, engine, and brain. Our algorithm takes 5-20 milliseconds to project the reference images to the desired view; including the warping time, 40 milliseconds are required to generate an image.
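The cutting-plane half of multi-planar rendering can be sketched as sampling the volume along an arbitrary plane; nearest-neighbor lookup keeps the sketch short, whereas a real implementation would interpolate:

```python
import numpy as np

def cutting_plane(vol, origin, u, v, size):
    """Sample a volume on the plane spanned by direction vectors u and v
    through origin (nearest-neighbour lookup; a real MPR would interpolate)."""
    img = np.zeros((size, size))
    for i in range(size):
        for j in range(size):
            p = np.round(origin + i * u + j * v).astype(int)
            if all(0 <= p[k] < vol.shape[k] for k in range(3)):
                img[i, j] = vol[tuple(p)]
    return img
```

Choosing axis-aligned u and v reproduces an ordinary slice, while oblique vectors give the reformatted cutting planes combined with the image-based surface rendering above.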