A computer-aided method for finding an optimal imaging plane for simultaneous measurement of the arterial blood
inflow through the 4 vessels leading blood to the brain by phase contrast magnetic resonance imaging is presented. The
method's performance is compared with manual selection by two observers. The skeletons of the 4 vessels are first extracted and their centerlines are generated. Then, the global direction of the relatively less curved internal carotid arteries
is calculated to determine the main flow direction. This direction is then used as a reference to identify segments of the
vertebral arteries that deviate strongly from the main flow direction. These segments are then used to identify
anatomical landmarks for improved consistency of the imaging plane selection. An optimal imaging plane is then
identified by finding the plane with the smallest error value, defined as the sum of the angles between the plane's
normal and the vessel centerlines' directions at the points of intersection. Error values obtained with the
automated and the manual methods were then compared using 9 magnetic resonance angiography (MRA) data sets. The
automated method considerably outperformed manual selection. The mean error value of the automated method
was significantly lower than that of the manual method, 0.09±0.07 vs. 0.53±0.45, respectively (p<.0001, Student's t-test).
Reproducibility of repeated measurements was analyzed using Bland and Altman's test; the mean 95% limits of
agreement for the automated and manual methods were 0.01 to 0.02 and 0.43 to 0.55, respectively.
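As a rough illustration of this error criterion, the Python sketch below computes the sum-of-angles measure for a candidate plane, assuming each vessel centerline is available as an ordered array of 3-D points; the helper names (centerline_tangent, plane_error) are hypothetical and not taken from the paper.

```python
# A minimal sketch of the plane-selection error described above, assuming each
# vessel centerline is given as an ordered (N, 3) array of 3-D points.
# Function and variable names are illustrative, not taken from the paper.
import numpy as np

def centerline_tangent(points, i):
    """Unit tangent of a centerline at sample index i (central difference)."""
    a = points[max(i - 1, 0)]
    b = points[min(i + 1, len(points) - 1)]
    t = b - a
    return t / np.linalg.norm(t)

def plane_error(centerlines, plane_point, plane_normal):
    """Sum of angles (radians) between the plane normal and each vessel's
    tangent where the centerline crosses the plane."""
    n = plane_normal / np.linalg.norm(plane_normal)
    total = 0.0
    for pts in centerlines:
        # signed distance of each centerline sample to the candidate plane
        d = (pts - plane_point) @ n
        crossings = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0]
        if len(crossings) == 0:
            continue  # this vessel does not intersect the candidate plane
        i = crossings[0]
        t = centerline_tangent(pts, i)
        # angle of 0 means the vessel is perpendicular to the imaging plane,
        # the ideal geometry for through-plane flow measurement
        total += np.arccos(np.clip(abs(n @ t), 0.0, 1.0))
    return total
```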
We have developed a new method to segment and analyze retinal layers in optical coherence tomography (OCT) images
with the intent of monitoring changes in thickness of retinal layers due to disease. OCT is an imaging modality that
obtains cross-sectional images of the retina, which makes it possible to measure thickness of individual layers. In this
paper we present a method that identifies six key layers in OCT images. OCT images present challenges to conventional
edge-detection algorithms, notably speckle noise, which significantly degrades the sharpness of inter-layer
boundaries. We use a directional filter bank, whose wedge-shaped passband helps reduce noise
while maintaining edge sharpness, in contrast to previous methods based on Gaussian or median filter variants, which
blur edges and yield poor edge-detection performance. This filter is utilized in a spatially variant
setting which uses additional information from the intersecting scans. The validity of extracted edge cues is determined
according to the amount of gray-level transition across the edge, its strength, continuity, relative location, and polarity.
These cues are processed according to the retinal model that we have developed, and this processing yields the edge contours.
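To illustrate the notion of a wedge-shaped passband, the toy Python sketch below keeps only the frequency components of an image whose orientation falls within an angular sector. This is a simple frequency-domain stand-in, not the actual filter bank used in the method, and the function name wedge_filter is an assumption.

```python
# Toy directional filtering with a wedge-shaped passband: frequency components
# are kept only within an angular sector, which smooths along the selected
# orientation while preserving edge sharpness across it.
import numpy as np

def wedge_filter(image, center_angle, half_width):
    """Keep frequency components whose orientation (modulo pi) lies within
    center_angle +/- half_width radians; expects a 2-D float image."""
    rows, cols = image.shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    theta = np.arctan2(fy, fx)                                    # orientation of each bin
    diff = np.angle(np.exp(2j * (theta - center_angle))) / 2.0    # wrap to +/- pi/2
    mask = (np.abs(diff) <= half_width).astype(float)
    mask[0, 0] = 1.0                                              # always keep the DC term
    spectrum = np.fft.fft2(image)
    return np.real(np.fft.ifft2(spectrum * mask))
```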
Image decomposition using directional filter banks is useful in discovering and extracting edge orientation cues for
target detection in airborne surveillance images. Since the images of interest are very large and the filtered images are not
downsampled in this application, conventional filtering can be computationally very demanding,
and efficient filtering procedures are needed. In this paper a novel filter bank structure for
directional filtering of images is proposed and its design is described. The design is carried out by imposing structural
constraints on the filters, which are implemented using a generalized notion of separable filtering. The structure uses
one-dimensional (1-D) filters as building blocks, which are employed in novel configurations to obtain filters with
narrow wedge-shaped passbands. Design procedures have been developed for constructing 16-band, 32-band, and 64-band
partitions starting with either built-in or user-specified 1-D prototypes. Implementations of filters using the
proposed method show significant improvement over conventional implementation, often by more than an order of
magnitude, a result also supported by a theoretical analysis of the filter complexity.
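The sketch below illustrates the complexity argument under a simplifying assumption: an ordinary separable kernel rather than the paper's generalized structure. Two 1-D passes cost roughly 2L multiplies per pixel versus L^2 for direct 2-D filtering with an L x L kernel.

```python
# Why 1-D building blocks cut cost: for an L x L kernel, direct 2-D filtering
# needs ~L^2 multiplies per pixel, while a row pass plus a column pass needs ~2L.
# This is the standard separable case, not the paper's generalized structure.
import numpy as np
from scipy.ndimage import convolve1d

def separable_filter(image, h_row, h_col):
    """Apply a separable 2-D filter as a row pass followed by a column pass."""
    tmp = convolve1d(image, h_row, axis=1, mode='reflect')   # filter along rows
    return convolve1d(tmp, h_col, axis=0, mode='reflect')    # then along columns

def multiplies_per_pixel(L):
    """Rough operation counts per output pixel for an L x L kernel."""
    return {"direct_2d": L * L, "separable_1d": 2 * L}

# Example: a 31-tap prototype gives 961 vs. 62 multiplies per pixel,
# roughly the order-of-magnitude saving reported above.
```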
In this paper, a method for detecting small moving objects in video sequences is described. In the first step, camera motion is eliminated using motion compensation. An adaptive subband decomposition structure is then used to analyze the motion-compensated image. In the highband subimages, moving objects appear as outliers, and they are detected using a statistical detection test based on lower order statistics. It turns out that, in general, the distribution of the residual error image pixels is almost Gaussian, whereas the distribution of the pixels in the residual image deviates from Gaussianity in the presence of outliers. By detecting the regions containing outliers, the boundaries of the moving objects are estimated. Simulation examples are presented.
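As an illustration of the outlier-detection step, the sketch below flags blocks of the motion-compensated residual whose statistics deviate from Gaussianity: for zero-mean Gaussian data the ratio E|x|/sigma is about sqrt(2/pi) ~ 0.80. This particular lower-order statistic and the function flag_outlier_blocks are illustrative assumptions, not necessarily the exact test used in the paper.

```python
# Flag moving-object regions in a motion-compensated residual image.
# For zero-mean Gaussian data, E|x| / std(x) ~= sqrt(2/pi) ~= 0.80; blocks whose
# ratio deviates strongly from this value are flagged as containing outliers.
# This is one possible lower-order-statistics test, not necessarily the paper's.
import numpy as np

def flag_outlier_blocks(residual, block=16, tol=0.15):
    """Return a boolean map of blocks whose statistics deviate from Gaussianity."""
    rows, cols = residual.shape
    flags = np.zeros((rows // block, cols // block), dtype=bool)
    for bi in range(flags.shape[0]):
        for bj in range(flags.shape[1]):
            patch = residual[bi*block:(bi+1)*block, bj*block:(bj+1)*block]
            patch = patch - patch.mean()
            sigma = patch.std()
            if sigma < 1e-8:
                continue                       # flat block, nothing to test
            ratio = np.abs(patch).mean() / sigma
            flags[bi, bj] = abs(ratio - np.sqrt(2 / np.pi)) > tol
    return flags
```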