Three-dimensional (3D) microscopic imaging techniques such as confocal microscopy have become a common tool in
measuring cellular structures. While computer volume visualization has advanced to a sophisticated level in medical applications, far fewer studies have addressed data acquired by 3D microscopic imaging techniques. To optimize the visualization of such data, it is important to consider their characteristics, such as thin data volumes. It is also of interest to apply modern GPU (graphics processing unit) technology to interactive volume rendering of these data.
In this paper, we discuss several texture-based techniques for visualizing confocal microscopy data that account for these characteristics and exploit GPU support. A simple technique generates a single set of 2D textures along the axial direction of image acquisition. An improved technique uses three sets of 2D textures aligned with the three principal directions, and creates the rendered image as a weighted sum of the images generated by blending the individual texture sets. In addition, we propose a new stencil-based approach in which texture blending is controlled by a stencil test: given the viewing condition, a texel is drawn only when its projection on the image plane falls inside the stencil area.
Finally, we have explored the use of multiple-channel datasets for flexible classification of objects. These studies are useful for optimizing the visualization of 3D microscopic imaging data.
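As an illustration of the three-stack idea above, one simple weighting scheme makes the contribution of each texture set depend on how well its slicing axis aligns with the viewing direction. This is a hypothetical sketch, not necessarily the paper's exact weights:

```python
import numpy as np

def stack_weights(view_dir):
    """Weight the three axis-aligned 2D-texture stacks by how closely
    each stack's slicing axis aligns with the viewing direction.
    (Illustrative scheme; the paper's weighting may differ.)"""
    v = np.asarray(view_dir, dtype=float)
    v /= np.linalg.norm(v)
    w = np.abs(v)          # |v.x|, |v.y|, |v.z| for the YZ, XZ, XY stacks
    return w / w.sum()     # normalize so the blend is a convex combination

# Viewing straight down the z axis uses only the XY (axial) stack.
print(stack_weights([0.0, 0.0, 1.0]))   # -> [0. 0. 1.]
```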
Automatic segmentation is an essential problem in biomedical imaging. It is still an open problem to automatically
segment biomedical images with complex structures and compositions. This paper proposes a novel algorithm called
Gradient-Intensity Clusters and Expanding Boundaries (GICEB). The algorithm attempts to solve the problem by jointly considering intensity, gradient, and spatial coherence in the image space. The solution combines a two-dimensional intensity-gradient histogram, domain connectivity in the image space, and region growing of segments. The algorithm has been tested on real images and the results have been evaluated.
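The two-dimensional histogram at the heart of such methods can be sketched as follows. The pairing of intensity with gradient magnitude is an assumption suggested by the algorithm's name; the clustering and region-growing stages are omitted:

```python
import numpy as np

def intensity_gradient_histogram(img, bins=64):
    """Build a 2D histogram over (intensity, gradient magnitude); in a
    GICEB-style method, clusters in this histogram seed the segments.
    (Minimal sketch; connectivity and growing steps are not shown.)"""
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)
    hist, i_edges, g_edges = np.histogram2d(
        img.ravel(), grad.ravel(), bins=bins)
    return hist, i_edges, g_edges

img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0   # toy image: bright square
hist, _, _ = intensity_gradient_histogram(img)
print(hist.sum())   # every pixel falls in exactly one bin -> 1024.0
```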
This paper proposes a novel method to register 3D surfaces. Given two surface meshes, we formulate registration as the problem of optimizing the parameterization of one mesh with respect to the other. The optimal parameterization is achieved in two steps. First, we find an initial solution close to the optimal one. Second, we elastically modify the parameterization to minimize a cost function. The modification of the parameterization is expressed as a linear combination of a relatively small number of low-frequency eigenvectors of an appropriate mesh Laplacian. The minimization of the cost function uses a standard nonlinear optimization procedure that determines the coefficients of the linear combination. Constraints are added so that the validity of the parameterization is preserved during optimization. The proposed method extends parametric registration of 2D images to the domain of 3D surfaces. It is generic and capable of elastically registering surfaces with arbitrary geometry. It is also very efficient and can be fully automatic. We believe this paper is the first to introduce eigenvectors of mesh Laplacians into the problem of surface registration. We have conducted experiments using real meshes that represent human cortical surfaces, and the results are promising.
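The low-frequency basis used in such a formulation can be illustrated with the eigendecomposition of a simple combinatorial graph Laplacian; the paper's actual mesh Laplacian may be weighted differently:

```python
import numpy as np

def low_frequency_basis(n_verts, edges, k=4):
    """Smallest-eigenvalue eigenvectors of the combinatorial graph
    Laplacian L = D - A; a deformation expressed in this basis is
    automatically smooth. (Sketch: an unweighted Laplacian stands in
    for whatever mesh Laplacian is actually used.)"""
    A = np.zeros((n_verts, n_verts))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    L = np.diag(A.sum(axis=1)) - A        # combinatorial Laplacian D - A
    vals, vecs = np.linalg.eigh(L)        # eigenvalues in ascending order
    return vals[:k], vecs[:, :k]

# 6-cycle graph: the first eigenvalue is 0 (the constant mode).
edges = [(i, (i + 1) % 6) for i in range(6)]
vals, _ = low_frequency_basis(6, edges, k=3)
print(np.round(vals, 6))   # -> [0. 1. 1.]
```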
Modern optical imaging techniques such as confocal and multi-photon microscopy can acquire volumetric datasets of cellular structures. In this paper we propose an approach for interactive volume rendering of such cellular datasets. In the first stage, we create a set of 2D textures corresponding to the image stacks in the original dataset. These textures are generated through a transfer function that maps voxel intensities to colors and opacities, and are stored in texture memory. In the second stage, by blending the textures with hardware support, we achieve interactive volume rendering, including rotation and zooming, on regular PCs. In addition, to generate good images for lateral viewing directions, we use two additional sets of 2D textures for the two orthogonal lateral directions, and the texture resolutions can be adapted to the rendering requirements and computer hardware. This approach offers an effective visualization environment for biologists to better understand and analyze measured cellular structures.
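The first stage's transfer-function mapping can be sketched as a per-slice lookup-table application; the LUT below is an arbitrary example, not a recommended classification:

```python
import numpy as np

def make_rgba_texture(slice_img, lut):
    """Map 8-bit voxel intensities of one image slice to RGBA texels via
    a 256-entry transfer-function lookup table. (Illustrative LUT; real
    transfer functions are tuned interactively by the user.)"""
    return lut[slice_img]          # fancy indexing: (H, W) -> (H, W, 4)

# Example LUT: intensity drives both the green channel and the opacity.
lut = np.zeros((256, 4), dtype=np.float32)
lut[:, 1] = np.linspace(0.0, 1.0, 256)     # green ramp
lut[:, 3] = np.linspace(0.0, 0.8, 256)     # opacity ramp

slice_img = np.array([[0, 255]], dtype=np.uint8)
tex = make_rgba_texture(slice_img, lut)
print(tex.shape)        # -> (1, 2, 4)
```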
We propose a new, generic method called POSS (Parameterization by Optimization in Spectral Space) to efficiently obtain parameterizations with low distortion for 3D surface meshes. Given a mesh, we first compute a valid initial parameterization using an available method and then express the optimal solution as a linear combination of the initial parameterization and an unknown displacement term. The displacement term is approximated by a linear combination of the eigenvectors with the smallest eigenvalues of a mesh Laplacian. This approximation considerably reduces the number of unknowns while minimizing the deviation from optimality. Finally, we find a valid parameterization with low distortion using a standard constrained nonlinear optimization procedure. POSS is fast, flexible, generic, and hierarchical. Its advantages have been confirmed by its application to planar parameterizations of surface meshes that represent complex human cortical surfaces. The method has promising potential to improve the efficiency of all parameterization techniques that involve constrained nonlinear optimization.
KEYWORDS: Visualization, Volume rendering, Image segmentation, Climatology, Atmospheric modeling, Visual analytics, 3D image processing, 3D modeling, Data modeling, 3D vision
The investigation of the climate system is one of the most exciting areas of scientific research today. In the climate system, oceanic and atmospheric flows play a critical role. Because these flows are highly complex across temporal and spatial scales, effective computer visualization techniques are crucial to their analysis and understanding. However, existing techniques and software are not sufficient for the demands of visualizing oceanic and atmospheric flows. In this paper, we use a new technique called streamline splatting to visualize 3D flows. This technique integrates streamline generation with the splatting method of volume rendering. It first generates segments of streamlines and then projects and splats the streamline segments onto the image plane. The projected streamline segments can be represented using a Hermite parametric model. Splatted curves are obtained by applying a Gaussian footprint function to the projected streamline segments, and the results are blended together. Thus the user can see through a volumetric flow field and obtain a 3D representation in a single image. The proposed technique has been applied to visualizing oceanic and storm flows. This work has the potential to be further developed into visualization software for regular PC workstations to help researchers explore and analyze climate flows.
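Streamline generation, the first step of the technique, can be sketched with a standard fourth-order Runge-Kutta integrator; the rotational field below is an illustrative stand-in for real oceanic or atmospheric data:

```python
import numpy as np

def streamline(field, seed, step=0.1, n_steps=50):
    """Trace one streamline through a steady 3D flow with 4th-order
    Runge-Kutta; the resulting polyline segments are what streamline
    splatting projects and splats. (Sketch with an analytic field.)"""
    p = np.asarray(seed, dtype=float)
    pts = [p.copy()]
    for _ in range(n_steps):
        k1 = field(p)
        k2 = field(p + 0.5 * step * k1)
        k3 = field(p + 0.5 * step * k2)
        k4 = field(p + step * k3)
        p = p + (step / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        pts.append(p.copy())
    return np.array(pts)

# A rigid rotation about the z axis keeps streamlines on circles.
rot = lambda p: np.array([-p[1], p[0], 0.0])
pts = streamline(rot, [1.0, 0.0, 0.0])
print(np.allclose(np.linalg.norm(pts[:, :2], axis=1), 1.0, atol=1e-5))  # -> True
```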
KEYWORDS: Visualization, Volume rendering, 3D visualizations, 3D image processing, Opacity, Particles, Image resolution, Convolution, Image segmentation, Control systems
We present a novel technique called streamline splatting to visualize 3D vector fields interactively. This technique integrates streamline generation with the splatting method of volume rendering. The key idea is to create volumetric streamlines from geometric streamlines and a kernel footprint function. To optimize rendering speed, we represent the volumetric streamlines as a series of slices perpendicular to the principal viewing direction. Thus 3D volume rendering is achieved by blending all slice textures with the support of graphics hardware. This approach allows the user to visualize 3D vector fields interactively, for example by rotation and zooming, on regular PCs. This new technique may lead to better understanding of complex structures in 3D vector fields.
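The footprint operation underlying splatting can be sketched as accumulating a Gaussian kernel into an image buffer; real renderers composite slices with opacity rather than simply adding:

```python
import numpy as np

def splat(image, center, radius=3.0, weight=1.0):
    """Accumulate a Gaussian footprint for one streamline sample into an
    image buffer, the core operation of splatting. (Additive blending
    only; opacity compositing is omitted in this sketch.)"""
    h, w = image.shape
    y, x = np.mgrid[0:h, 0:w]
    d2 = (x - center[0]) ** 2 + (y - center[1]) ** 2
    image += weight * np.exp(-d2 / (2.0 * radius ** 2))
    return image

img = np.zeros((9, 9))
splat(img, center=(4, 4))
print(img.argmax())   # peak lands at the splat center: flat index 4*9+4 = 40
```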
Determining the neural connectivity of the brain is an essential problem in neuroscience, and the fluorescent imaging technique is very useful for studying this problem. In this technique, a real brain (typically of a rat) is injected with a fluorescent dye and then sectioned into thin slices. Each slice is exposed to illumination and a high-resolution image is captured. The areas in a slice that are affected by the dye fluoresce strongly, and these regions reveal useful information about the neural connectivity. However, it is challenging to automatically register the image series. In this paper, we propose effective methods for the registration of fluorescent neural images. Our approach is based on the edge features of the images. First, we use an effective method for edge detection. Then we apply multi-level pattern recognition using clustering algorithms with the Mahalanobis distance criterion to isolate individual features. Finally, we adopt an elastic registration scheme using the thin-plate spline algorithm to solve the multivariate interpolation problem. Once all images are registered, we apply an elliptical weighted average (EWA) splatting technique for volume visualization. Our rendered results clearly display the 3D structures of the neural connectivity.
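The thin-plate-spline step can be sketched with SciPy's RBF interpolator, which solves the same multivariate interpolation problem; the matched point pairs below are toy values standing in for the clustered edge features:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Matched feature points in the moving image and their targets in the
# fixed image (toy coordinates; real pairs come from feature clustering).
src = np.array([[0, 0], [0, 10], [10, 0], [10, 10], [5, 5]], float)
dst = src + np.array([1.0, -0.5])          # a pure translation, for checking

# Thin-plate-spline warp: the TPS kernel plus its affine polynomial term
# interpolates the matched pairs exactly.
warp = RBFInterpolator(src, dst, kernel='thin_plate_spline')
print(warp(np.array([[2.0, 3.0]])))        # -> ~[[3.0, 2.5]]
```

Because a thin-plate spline contains an affine term, a pure translation of the landmarks is reproduced exactly everywhere, which makes the toy case easy to verify.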
Confocal optical microscopy is one of the most significant advances in optical microscopy in the 20th century and has become a widely accepted tool for biological imaging. This technique can obtain 3D volume information through non-invasive optical sectioning and scanning of 2D confocal planes inside the specimen. In this paper, we conduct a physically based computer simulation of light scattering and propagation in the biological specimen during the imaging process. We implement an efficient Monte Carlo technique to simulate light scattering by biological particles, trace the entire light propagation within the scattering medium, produce fluorescence at the fluorescent dyes, and record light intensity collected at the detector. This study will not only help to verify analytic modeling of light scattering in biological media, but also be useful to improve the design of optical imaging systems.
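One core ingredient of such a simulation, sampling the free path between scattering events, can be sketched as follows; the phase function, absorption, and detector geometry of the full simulation are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def photon_path_lengths(mu_s, n_photons=100000):
    """Sample free path lengths between scattering events from the
    Beer-Lambert law, p(s) = mu_s * exp(-mu_s * s), the basic stepping
    rule of a Monte Carlo light-scattering simulation. (Sketch only.)"""
    u = rng.random(n_photons)
    return -np.log(1.0 - u) / mu_s     # inverse-CDF sampling

s = photon_path_lengths(mu_s=10.0)     # mean free path 1/mu_s = 0.1
print(round(s.mean(), 2))              # -> 0.1
```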
Studies in experimental neuroscience have found evidence that the shapes of the cortical surfaces of human brains may be connected with neural functioning. This paper presents a morphological study of cortical surfaces. The work consists of four major elements. First, we collect a sufficient number of 3D MRI datasets of brains belonging to different categories of people. Second, we extract the cortical surfaces from the 3D MRI datasets. Third, we apply statistical analysis to characterize the morphological features of the cortical surfaces. Fourth, we use 3D visualization to illustrate the shapes and characteristics of the cortical surfaces in an interactive environment.
This paper proposes an accurate, compact, and generic method for representing spectral functions. The focus is on smooth functions, which is the case for most natural spectra. While pursuing the idea of using a Fourier series expansion for its generality of representation, we attempt to remove the Gibbs phenomenon. The solution we propose is a new method called symmetric extension. Given a smooth spectral function S1, we first generate a new function S2 that is a mirror reflection of S1 about the upper bound of the wavelength domain. We then create another function U that merges S1 and S2, and apply the Fourier expansion to U. Because the values of U at its boundaries are equal, Gibbs oscillation is largely reduced. Moreover, since U is symmetric, all sine terms in the Fourier expansion vanish, so we only need to keep the cosine coefficients. These properties make our method not only accurate but also compact. We have tested the method on a large number of real spectra of various types and compared it with existing methods such as direct Fourier expansion and the linear model. The numerical results confirm the advantages of the proposed method.
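The construction can be checked numerically: mirroring a sampled spectrum about its upper bound and transforming the merged signal leaves only cosine terms. The sketch below uses whole-sample mirroring so that the DFT of the extension is exactly real:

```python
import numpy as np

def symmetric_cosine_coeffs(s):
    """Mirror a sampled spectrum about its upper wavelength bound and
    Fourier-transform the merged signal; by symmetry the sine terms
    vanish, leaving only cosine coefficients. (Whole-sample mirroring
    is one discretization choice; the paper may use another.)"""
    u = np.concatenate([s, s[-2:0:-1]])      # S1 followed by its mirror S2
    return np.fft.rfft(u)

lam = np.linspace(0.0, 1.0, 64)
s = np.exp(-(lam - 0.5) ** 2 / 0.02)         # a smooth, spectrum-like bump
F = symmetric_cosine_coeffs(s)
print(np.abs(F.imag).max() < 1e-10)          # sine terms vanish -> True
```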
In this paper, we propose a computer-assisted approach for spectral design and synthesis. This approach starts with an initial spectrum, modifies it interactively, evaluates the change, and determines the optimal spectrum. Given a requested change as a function of wavelength, we model the change function with a Gaussian function. When a metameric constraint is imposed, we propose a method that, from the Gaussian function of the requested change, generates a change function such that the resulting spectrum has the same color as the initial spectrum. We have tested the proposed method with different initial spectra and change functions, and implemented an interactive graphics environment for spectral design and synthesis. The proposed approach and its graphics implementation can be helpful in a number of applications, such as lighting of building interiors, textile coloration, pigment development for automobile paints, and spectral computer graphics.
A fundamental problem in imaging science and engineering is to characterize wave scattering from a small region of a surface or volume. This behavior is generally described by a multidimensional scattering function. This paper proposes a new representation method for scattering functions that optimizes data compression. Our method first performs a Fourier transform in the wavelength dimension and then a spherical harmonic transform, for each Fourier coefficient, in the spatial-direction dimensions. The representation errors are studied numerically for different levels of spherical harmonics and different numbers of Fourier components. This method efficiently stores scattering-function data and has great potential for applications in imaging science and engineering.
In confocal microscopy images, a common observation is that images deeper in a stack have lower voxel intensities and are usually more blurred than the upper ones. The key reasons are light absorption and scattering by the objects and particles in the volume through which light passes. This paper proposes a new technique to reduce such degradation using an adaptive intensity-compensation and image-sharpening algorithm. With these image-processing procedures, advanced 3D volume-rendering techniques can be applied to more faithfully visualize confocal microscopy images.
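A minimal form of depth-dependent intensity compensation applies an exponential gain to deeper slices; the fixed gain coefficient below is a placeholder for the adaptive estimate described in the paper:

```python
import numpy as np

def compensate_stack(stack, alpha=0.05):
    """Brighten deeper slices of a confocal stack with an exponential
    gain exp(alpha * z) that counteracts attenuation from absorption
    and scattering. (Fixed-alpha sketch; an adaptive method would
    estimate the gain from the data.)"""
    z = np.arange(stack.shape[0], dtype=float)
    gain = np.exp(alpha * z)[:, None, None]
    return np.clip(stack * gain, 0.0, 1.0)

stack = np.full((10, 4, 4), 0.5)     # toy stack: uniform object
out = compensate_stack(stack)
print(out[-1, 0, 0] > out[0, 0, 0])  # deeper slices are boosted -> True
```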
This paper proposes a new approach to constructing spectra for colors based on measured spectra. The reflectances of a set of 1,400 color samples are measured with a spectroradiometer. Given any color, its spectrum is generated from a tightly enclosing tetrahedron formed by measured color points. A method is also proposed to enlarge the span of the base spectra for spectral generation. Using our approach, the derived spectra correspond closely to reality, and the derivation works for almost any color.
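The tetrahedron-based generation can be sketched as solving for barycentric weights of the target color with respect to the four corner colors and blending the corner spectra with the same weights; the colors and spectra below are toy values, not measured data:

```python
import numpy as np

def blend_spectrum(color, corner_colors, corner_spectra):
    """Express a target color as barycentric weights of the four corner
    colors of an enclosing tetrahedron, then blend the measured corner
    spectra with those weights. (Sketch; the paper's tetrahedron search
    over 1,400 samples is not shown.)"""
    P = np.vstack([np.asarray(corner_colors, float).T,
                   np.ones(4)])          # 4x4 system: sum w*p = c, sum w = 1
    c = np.append(color, 1.0)
    w = np.linalg.solve(P, c)
    return w, w @ np.asarray(corner_spectra, float)

corners = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
spectra = np.eye(4)                      # placeholder corner spectra
w, spec = blend_spectrum([0.25, 0.25, 0.25], corners, spectra)
print(np.round(w, 3))    # -> [0.25 0.25 0.25 0.25]
```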
Modeling light reflection from rough surfaces is an essential problem in computer graphics, computational vision, and multispectral imaging. Existing methods commonly separate the total reflection into diffuse and specular components, but this introduces nonphysical arbitrariness in choosing the relative weights of the two components. Existing models also lack a sufficient treatment of the self-shadowing effect, which is important for rough surfaces. To eliminate these drawbacks, we propose a new reflection model that uses only physical parameters. The surfaces are assumed to be homogeneous, isotropic, and microscopically smooth, and their height probability densities are assumed to be Gaussian. We derive the one-bounce reflection from the Fresnel coefficient, a self-shadowing factor, and a probability function for surface orientation. The shadowing factor is calculated analytically from the statistical properties of a rough surface, including the height probability density and correlation function, and it agrees well with numerical simulation. Since all parameters in this model are physical, it can easily be verified by measurement. Moreover, as a single term, the model produces a sharp specular highlight when a surface is smooth and exhibits diffuse behavior when the surface is rough. This advantage is demonstrated through rendered images.
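One of the physical ingredients, the Fresnel coefficient, can be sketched for unpolarized light at a dielectric interface; the shadowing factor and orientation distribution of the full model are not reproduced here:

```python
import numpy as np

def fresnel_unpolarized(cos_i, n):
    """Unpolarized Fresnel reflectance at a dielectric interface with
    relative refractive index n, averaging the s- and p-polarized
    amplitude coefficients. (One ingredient only; the shadowing and
    slope-distribution factors of the full model are omitted.)"""
    sin_t = np.sqrt(np.maximum(0.0, 1.0 - cos_i ** 2)) / n  # Snell's law
    cos_t = np.sqrt(np.maximum(0.0, 1.0 - sin_t ** 2))
    r_s = (cos_i - n * cos_t) / (cos_i + n * cos_t)
    r_p = (n * cos_i - cos_t) / (n * cos_i + cos_t)
    return 0.5 * (r_s ** 2 + r_p ** 2)

# Normal incidence on glass (n = 1.5): R = ((n-1)/(n+1))^2 = 0.04
print(round(fresnel_unpolarized(1.0, 1.5), 3))   # -> 0.04
```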