Thermal and visible cameras can be characterized by their point spread function (PSF), which captures the aberrations induced by the image formation process, including effects due to diffraction or motion. Various techniques for estimating the PSF from a single image of a target object consisting of a random pattern have been shown to be effective. Here, we describe a computational pipeline for estimating parametric Gaussian PSFs, characterized by their width, height, and orientation, from binary random pattern targets that are suitable for thermal imaging and easy to manufacture. Specifically, we consider the influence of deviating from a strictly random pattern so that the targets can be manufactured with common cutting or 3D printing devices. We evaluate the estimation accuracy on simulated patterns with varying dot, pitch, and target sizes for different values of the PSF parameters. Finally, we show experimental examples acquired with manufactured targets. Our results indicate that the proposed random pattern targets offer a simple and affordable approach to estimating local PSFs.
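A minimal sketch can illustrate the idea of fitting a parametric Gaussian PSF to a blurred random-dot target. The snippet below is only a simplified stand-in for the pipeline described above (the target generation, blur model, noise level, and optimizer are assumptions, not the authors' implementation): it simulates a binary random pattern blurred by an anisotropic Gaussian PSF and recovers the two widths and the orientation by least squares.

```python
# Simplified sketch: simulate a blurred binary random-dot target and fit the
# anisotropic Gaussian PSF parameters (sigma_x, sigma_y, theta) by least squares.
import numpy as np
from scipy.signal import fftconvolve
from scipy.optimize import minimize

def gaussian_psf(sigma_x, sigma_y, theta, size=31):
    """Anisotropic Gaussian kernel with principal widths sigma_x, sigma_y rotated by theta."""
    r = (size - 1) / 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    psf = np.exp(-0.5 * ((xr / sigma_x) ** 2 + (yr / sigma_y) ** 2))
    return psf / psf.sum()

rng = np.random.default_rng(0)
target = (rng.random((256, 256)) > 0.5).astype(float)   # binary random pattern (assumed known)
true_psf = gaussian_psf(2.0, 1.2, 0.4)
observed = fftconvolve(target, true_psf, mode="same")
observed += 0.01 * rng.standard_normal(observed.shape)  # measurement noise

def cost(p):
    sx, sy, th = p
    model = fftconvolve(target, gaussian_psf(sx, sy, th), mode="same")
    return np.mean((model - observed) ** 2)

fit = minimize(cost, x0=[1.0, 1.0, 0.0], method="Nelder-Mead")
print("estimated (sigma_x, sigma_y, theta):", fit.x)
```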
Handheld thermal imaging devices can capture images in quick succession, each with a slightly different orientation, resulting in image series that can be combined to produce an improved image through multi-image deconvolution. Implementing deconvolution algorithms that take advantage of all the information contained in the image series, and that produce an image whose field of view is as large as the coverage of all collected images, is challenging because the image series covers a possibly non-square area. In this paper, we present a multi-image deconvolution method that addresses this boundary condition problem. First, we determine the relative geometric transformations between the images to define a rectangular canvas that can accommodate the full field of view covered by the image series. Next, we formulate the deconvolution problem as a regularized minimization problem with two terms: (i) the residue between the forward transformation applied to the reconstructed candidate and the measured images and (ii) a regularization term that takes image priors into account. To accommodate the non-square coverage of the combined images, which results in boundary artifacts when the forward model is used during iterative minimization, we recast the problem into one where the original scene is masked, thereby mitigating the effects of unknown image values beyond image boundaries. We characterize our method on both synthetic and experimental images. We observe both visual and quantitative improvements of the images at the boundaries, where distortions are attenuated.
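The role of the mask in the data-fidelity term can be illustrated with a short sketch. The following snippet is an assumption-laden simplification (images already registered onto a common canvas, a single shared blur kernel, Tikhonov regularization in place of the paper's image prior): it shows a gradient step of a masked multi-image objective, where each mask zeroes out canvas pixels that the corresponding image never observed, which is what mitigates boundary artifacts during the iterations.

```python
# One gradient step of F(x) = sum_k ||M_k * (h (*) x - y_k)||^2 + lam*||x||^2
# (simplified sketch; not the paper's full forward model).
import numpy as np
from scipy.signal import fftconvolve

def grad_step(x, h, images, masks, lam=1e-3, step=0.5):
    grad = 2 * lam * x
    h_flip = h[::-1, ::-1]                                 # adjoint of the convolution
    for y, m in zip(images, masks):
        residual = m * (fftconvolve(x, h, mode="same") - y)  # mask out unobserved pixels
        grad += 2 * fftconvolve(residual, h_flip, mode="same")
    return x - step * grad

# Hypothetical usage with two images placed on a larger rectangular canvas:
canvas = np.zeros((128, 128))
h = np.ones((5, 5)) / 25.0                                 # assumed blur kernel
masks = [np.zeros_like(canvas) for _ in range(2)]
masks[0][:100, :100] = 1.0                                 # footprint of image 1 on the canvas
masks[1][28:, 28:] = 1.0                                   # footprint of image 2 on the canvas
images = [m * fftconvolve(canvas + 1.0, h, mode="same") for m in masks]
x = np.zeros_like(canvas)
for _ in range(20):
    x = grad_step(x, h, images, masks)
```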
Thermal image formation can be modeled as the convolution of an ideal image with a point spread function (PSF) that characterizes the optical degradations. Although simple space-invariant models are sufficient to model diffraction-limited optical systems, they cannot capture local variations that arise from nonuniform blur. Such degradations are common when the depth of field is limited or when the scene involves motion. Although space-variant deconvolution methods exist, they often require knowledge of the local PSF. In this paper, we adapt a local PSF estimation method (based on a learning approach and initially designed for visible-light microscopy) to thermal images. Our model uses a ResNet-34 convolutional neural network (CNN) that we trained on a large thermal image dataset (CVC-14), which we split into training, tuning, and evaluation subsets. We annotated the sets by synthetically blurring sharp patches in the images with PSFs whose parameters covered a range of values, thereby producing pairs of sharp and blurred images that could be used for supervised training and ground-truth evaluation. We observe that our method is effective at recovering PSFs when their width is larger than the size of a pixel. The estimation accuracy depends on the careful selection of training images that contain a wide range of spatial frequencies. In conclusion, while local PSF parameter estimation via a trained CNN can be efficient and versatile, it requires selecting a large and varied training dataset. Local deconvolution methods for thermal images could benefit from our proposed PSF estimation method.
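The way supervised pairs can be produced by synthetic blurring is easy to sketch. The snippet below is illustrative only (names, parameter ranges, and noise level are assumptions; for brevity it uses an axis-aligned Gaussian and omits orientation): each sharp patch is blurred with a randomly drawn PSF, yielding an input image and its regression target.

```python
# Simplified sketch of generating (blurred patch, PSF parameters) training pairs.
import numpy as np
from scipy.ndimage import gaussian_filter

def make_training_pair(sharp_patch, rng):
    sigma_x = rng.uniform(0.5, 3.0)       # PSF widths in pixels (assumed range)
    sigma_y = rng.uniform(0.5, 3.0)
    blurred = gaussian_filter(sharp_patch, sigma=(sigma_y, sigma_x))
    blurred += 0.01 * rng.standard_normal(blurred.shape)   # sensor noise
    return blurred.astype(np.float32), np.array([sigma_x, sigma_y], np.float32)

rng = np.random.default_rng(1)
patch = rng.random((64, 64))              # stand-in for a sharp thermal patch
x, y = make_training_pair(patch, rng)     # network input and regression target
```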
Many applications rely on thermal imagers to complement or replace visible light sensors in difficult imaging conditions. Recent advances in machine learning have opened the possibility of analyzing or enhancing images, yet these methods require large annotated databases. Training approaches that leverage data augmentation via simulated and synthetically-generated images could offer promising prospects. Here, we report on a method that uses generative adversarial nets (GANs) to synthesize images of a complementary contrast. Starting from a dual-modality dataset of co-registered visible and thermal images, we trained a GAN to generate synthetic thermal images from visible images and vice versa. Our results show that the procedure yields sharp synthesized images that might be used to augment dual-modality datasets or assist in visual interpretation, yet are also subject to the limitations imposed by contrast independence between thermal and visible images.
Significance: Despite recent developments in microscopy, temporal aliasing can arise when imaging dynamic samples. Modern sampling frameworks, such as generalized sampling, mitigate aliasing but require measurement of temporally overlapping and potentially negative-valued inner products. Conventional cameras cannot collect these directly as they operate sequentially and are only sensitive to light intensity.
Aim: We aim to mitigate aliasing in microscopy of dynamic monochrome samples by implementing generalized sampling via the use of a color camera and modulated color illumination.
Approach: We solve the overlap problem by spectrally multiplexing the acquisitions and using (positive) B-spline segments as projection kernels. Reconstruction involves spectral unmixing and inverse filtering. We implemented this method using a color LED illuminator. We evaluated its performance by imaging a rotating grid and its applicability by imaging the beating zebrafish embryo heart in transmission and light-sheet microscopes.
Results: Compared to stroboscopic imaging, our method mitigates aliasing with performance improving as the projection order increases. The approach can be implemented in conventional microscopes but is limited by the number of available LED colors and camera channels.
Conclusions: Generalized sampling can be implemented via color modulation in microscopy to mitigate temporal aliasing. The simple hardware requirements could make it applicable to other optical imaging modalities.
Generalized sampling is a flexible framework for signal acquisition that relaxes the need for ideal pre-filters. Nevertheless, its implementation remains challenging for dynamic imaging applications because it requires simultaneously measuring multiple overlapping inner products and because cameras can only measure positive signals (intensities). We present a method to collect videos of monochromatic objects by projecting the incoming signal at each pixel onto a temporal B-spline space of degree 0, 1, or 2, using a conventional RGB camera and a modulated three-color light source for illumination. Specifically, we solve the basis-function overlap problem by multiplexing the acquisition in different color ranges and use B-spline pieces (which are positive) as the projection kernels of a biorthogonal projection-expansion basis pair. The steps to recover signal samples include spectral unmixing and inverse filtering. Reconstructions we obtained from simulated and experimentally acquired microscopy data demonstrate the feasibility of our approach.
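The color-multiplexing idea can be illustrated with a reduced sketch. The snippet below is an assumption-laden simplification (degree-1 kernels only, two abstract channels rather than the RGB setup, idealized unmixing): consecutive triangular projection kernels overlap in time, but assigning even-shifted kernels to one color channel and odd-shifted kernels to the other lets a camera, whose exposures cannot overlap within a channel, still measure all inner products.

```python
# Simplified sketch of time-overlapping B-spline projections made measurable by
# color multiplexing (illustrative only).
import numpy as np

def bspline1(t):
    """Degree-1 (triangular) B-spline with support [-1, 1]."""
    return np.maximum(0.0, 1.0 - np.abs(t))

dt = 0.01
t = np.arange(0.0, 10.0, dt)
f = 1.0 + 0.5 * np.sin(2 * np.pi * 0.7 * t)     # positive intensity signal

shifts = np.arange(1, 9)                        # kernel centers (integer times)
coeffs = np.empty(len(shifts))
channel = np.empty(len(shifts), dtype=int)      # 0 -> first color, 1 -> second color
for i, k in enumerate(shifts):
    kernel = bspline1(t - k)                    # overlaps its temporal neighbours
    coeffs[i] = np.sum(f * kernel) * dt         # inner product <f, beta1(. - k)>
    channel[i] = k % 2                          # multiplexing removes the overlap per channel

# After spectral unmixing of real camera data, the per-channel measurements would
# be interleaved back into a single coefficient stream like `coeffs`, which is
# then inverse-filtered to recover signal samples.
print(coeffs)
```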
3D deconvolution in optical wide-field microscopy aims at recovering optical sections through thick objects. Acquiring data from multiple, mutually tilted directions helps fill the missing cone of information in the optical transfer function, which normally renders the deconvolution problem particularly ill-posed. Here, we propose a fast-converging iterative deconvolution method for multi-angle deconvolution microscopy. Specifically, we formulate the imaging problem using a filter-bank structure, and present a multi-channel variation of a thresholded Landweber deconvolution algorithm with wavelet-sparsity regularization. Decomposition of the minimization problem into subband-dependent terms ensures fast convergence. We demonstrate the applicability of the algorithm via simulation results.
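The basic thresholded Landweber update can be sketched in a few lines. The snippet below is a single-channel, 2D illustration only (the paper's multi-angle, filter-bank formulation with subband-dependent terms is more involved): it alternates a Landweber gradient step with soft-thresholding of the wavelet coefficients.

```python
# Simplified single-channel thresholded Landweber iteration:
#   x <- soft_threshold_wavelet( x + tau * H^T (y - H x) )
import numpy as np
import pywt
from scipy.signal import fftconvolve

def thresholded_landweber(y, h, n_iter=50, tau=1.0, lam=0.01, wavelet="db4"):
    x = np.zeros_like(y)
    h_flip = h[::-1, ::-1]                       # adjoint of the blur operator
    for _ in range(n_iter):
        residual = y - fftconvolve(x, h, mode="same")
        x = x + tau * fftconvolve(residual, h_flip, mode="same")
        coeffs = pywt.wavedec2(x, wavelet, level=3)
        coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(c, tau * lam, mode="soft") for c in band)
            for band in coeffs[1:]
        ]
        x = pywt.waverec2(coeffs, wavelet)[:y.shape[0], :y.shape[1]]
    return x
```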
Digital holography plays an increasingly important role in biomedical imaging; it is particularly noninvasive and allows quantitatively characterizing both the amplitude and phase of propagating wave fronts. Fresnelets have been introduced as both a conceptual and practical tool to reconstruct digital holograms, simulate the propagation of monochromatic waves, or compress digital holograms. Propagating wavefronts that have a sparse representation in a traditional wavelet basis in their originating plane have a similarly sparse representation in propagation-distance-dependent Fresnelet bases. Although several applications have been reported, no implementation has been made widely available. Here we describe a Matlab-based Fresnelets toolbox that provides a set of user-friendly functions to implement the Fresnelet transform.
Multi-modal microscopy, such as combined bright-field and multi-color fluorescence imaging, allows capturing a sample's anatomical structure, cell dynamics, and molecular activity in distinct imaging channels. However, only a limited number of channels can be acquired simultaneously, and acquiring each channel sequentially at every time point drastically reduces the achievable frame rate. Multi-modal imaging of rapidly moving objects (such as the beating embryonic heart), which requires high frame rates, has therefore remained a challenge. We have developed a method to temporally register multi-modal, high-speed image sequences of the beating heart that were sequentially acquired. Here we describe how maximizing the mutual information of time-shifted wavelet coefficient sequences leads to an implementation that is both accurate and fast. Specifically, we validate our technique on synthetically generated image sequences and show its effectiveness on experimental bright-field and fluorescence image sequences of the beating embryonic zebrafish heart. This method opens the prospect of cardiac imaging in multiple channels at high speed without the need for multiple physical detectors.
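The core of mutual-information-based temporal registration can be sketched in a few lines. The snippet below is illustrative only (histogram binning, search range, and the toy signals are assumptions; the paper applies the idea to wavelet coefficient sequences of the two channels): it estimates the time shift that maximizes the mutual information between two 1D feature sequences.

```python
# Simplified sketch: temporal offset estimation by maximizing mutual information.
import numpy as np

def mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

def best_shift(seq_a, seq_b, max_shift=50):
    shifts = list(range(-max_shift, max_shift + 1))
    scores = [mutual_information(seq_a, np.roll(seq_b, s)) for s in shifts]
    return shifts[int(np.argmax(scores))]

# Hypothetical usage: two channels with different contrast but a common rhythm.
t = np.linspace(0, 10, 1000)
a = np.sin(2 * np.pi * t)
b = np.tanh(2 * np.sin(2 * np.pi * (t - 0.2)))   # nonlinear contrast, time-shifted
print("estimated shift (samples):", best_shift(a, b))
```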
Congenital cardiovascular defects are very common, occurring in 1% of live births, and cardiovascular failures are the leading cause of birth defect-related deaths in infants. To improve the diagnosis, prevention, and treatment of cardiovascular abnormalities, we need to understand not only how cells form the heart and vessels but also how physical factors such as heart contraction and blood flow influence heart development and changes in the circulatory network. Mouse models are an excellent resource for studying cardiovascular development and disease because of their resemblance to humans, rapid generation time, and the availability of mutants with cardiovascular defects linked to human diseases. In this work, we present results on the development and application of Doppler Swept Source Optical Coherence Tomography (DSS-OCT) for imaging cardiovascular dynamics and blood flow in the mouse embryonic heart and vessels. Our studies demonstrated that the spatial and temporal resolution of DSS-OCT makes it possible to perform sensitive measurements of heart and vessel wall movements and to investigate how contractile waves facilitate the movement of blood through the circulatory system.
The reconstruction of images from projections, diffraction fields, or other similar measurements requires applying signal processing techniques within a physical context. Although modeling of the acquisition procedure can conveniently be carried out in the continuous domain, actual reconstruction from experimental measurements requires the derivation of discrete algorithms that are accurate, efficient, and robust. In recent years, wavelets and multiresolution approaches have been applied successfully to common image processing tasks, bridging the gap between discrete and continuous representations. We show that it is possible to express many physical problems in a wavelet framework, thereby allowing the derivation of efficient algorithms that take advantage of wavelet properties, such as multiresolution structure, sparsity, and space-frequency decompositions. We review several examples of such algorithms with applications to X-ray tomography, digital holography, and confocal microscopy and discuss possible future extensions to other modalities.
Confocal microscopy enables us to track myocytes in the embryonic zebrafish heart. The Zeiss LSM 5 Live high-speed confocal microscope has been used to take optical sections (at 3 μm intervals and 151 frames per second) through a fluorescently labeled zebrafish heart at two developmental stages (26 and 34 hours post fertilization (hpf)). These data provide unique information, allowing us to conjecture on the morphology and biomechanics of the developing vertebrate heart. Nevertheless, the myocytes whose positions could be determined in a reliable manner were located sparsely and mostly on one side of the heart tube. This difficulty was overcome using computational methods that give the longitudinal, radial, and circumferential displacements of the myocytes as well as their contractile behavior. Strain analysis has shown that, in the early embryonic heart tube, only the caudal region (near the inflow) and another point in the middle of the tube can be active; the rest appears to be mostly passive. This statement is based on the delay between the major strain and the displacement that a material point experiences. Wave-like propagation of all three components of the displacement, especially in the circumferential direction, as well as the almost-periodic changes of the maximum strain, supports the hypothesis of a helical muscle structure embedded in the tube. Changes of geometry in the embryonic heart after several hours are used to verify speculations about the structure based on the earlier images and the aforementioned methods.
KEYWORDS: Signal to noise ratio, Digital holography, Reconstruction algorithms, Holograms, Gold, Amplifiers, Phase retrieval, 3D image reconstruction, Charge-coupled devices, Cameras
Three-dimensional information about an object, such as its depth, may be captured and stored digitally in a single, two-dimensional, real-valued hologram acquired in an off-axis geometry. Digital reconstruction of the hologram permits the quantitative retrieval of depth data and object position, or allows post-acquisition focusing on selected scenes. Over the past few decades, a number of reconstruction algorithms have been proposed to perform this task in various experimental conditions and for different purposes (metrology, imaging, etc.). Here, we aim at providing guidelines for deciding which algorithm to apply to a given problem. We evaluate reconstruction procedures based on criteria such as reconstruction quality and computational complexity. We propose a simulation procedure for the acquisition process that allows us to compare a large body of experimental situations and, because the ground truth is known, to perform quantitative comparisons.
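A minimal forward simulation of an off-axis acquisition can be sketched as follows. The snippet is illustrative only (the object field, carrier frequency, and noise model are assumptions, not the paper's simulation procedure): the sensor records the real-valued intensity of the object wave plus a tilted plane reference wave, which provides a ground truth against which reconstruction algorithms can be compared.

```python
# Simplified sketch of simulating an off-axis hologram acquisition.
import numpy as np

n = 512
y, x = np.mgrid[0:n, 0:n]

# Ground-truth complex object wave (assumed amplitude and phase patterns).
amplitude = 1.0 + 0.5 * np.exp(-((x - n / 2) ** 2 + (y - n / 2) ** 2) / (2 * 60.0 ** 2))
phase = 0.5 * np.sin(2 * np.pi * x / 128.0)
obj = amplitude * np.exp(1j * phase)

# Tilted plane reference wave; the off-axis carrier separates the interference
# terms in the Fourier domain.
kx, ky = 0.15, 0.10                          # carrier frequency in cycles/pixel
ref = np.exp(1j * 2 * np.pi * (kx * x + ky * y))

hologram = np.abs(obj + ref) ** 2            # real-valued intensity on the sensor
hologram += 0.01 * np.random.default_rng(0).standard_normal(hologram.shape)
```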
With the availability of new confocal laser scanning microscopes, fast biological processes, such as the blood flow in living organisms at early stages of the embryonic development, can be observed with unprecedented time resolution. When the object under study has a periodic motion, e.g. a beating embryonic heart, the imaging capabilities can be extended to retrieve 4D data. We acquire nongated slice-sequences at increasing depth and retrospectively synchronize them to build dynamic 3D volumes. Here, we present a synchronization procedure based on the temporal correlation of wavelet features. The method is designed to handle large data sets and to minimize the influence of artifacts that are frequent in fluorescence imaging techniques such as bleaching, nonuniform contrast, and photon-related noise.
KEYWORDS: Heart, Wavelets, Cardiac imaging, Point spread functions, Confocal microscopy, Data acquisition, Microscopes, Wavelet transforms, In vivo imaging, 3D image processing
Being able to acquire, visualize, and analyze 3D time series (4D data) from living embryos makes it possible to understand complex dynamic movements at early stages of embryonic development. Despite recent technological breakthroughs in 2D dynamic imaging, confocal microscopes remain quite slow at capturing optical sections at successive depths. However, when the studied motion is periodic—such as for a beating heart—a way to circumvent this problem is to acquire, successively, sets of 2D+time slice sequences at increasing depths over at least one time period and later rearrange them to recover a 3D+time sequence. In other imaging modalities at macroscopic scales, external gating signals, e.g., an electro-cardiogram, have been used to achieve proper synchronization. Since gating signals are either unavailable or cumbersome to acquire in microscopic organisms, we have developed a procedure to reconstruct volumes based solely on the information contained in the image sequences. The central part of the algorithm is a least-squares minimization of an objective criterion that depends on the similarity between the data from neighboring depths. Owing to a wavelet-based multiresolution approach, our method is robust to common confocal microscopy artifacts. We validate the procedure on both simulated data and in vivo measurements from living zebrafish embryos.
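The pairwise building block of such retrospective synchronization can be sketched briefly. The snippet below is a deliberately reduced illustration (pixel-domain least squares between two neighboring depths only; the paper minimizes a wavelet-based criterion over all depths jointly, which is far more robust to artifacts): each slice sequence is assumed to cover an integer number of heartbeat periods, so the relative shift is circular.

```python
# Simplified sketch: circular shift between slice sequences at neighboring depths.
import numpy as np

def pairwise_shift(seq_a, seq_b):
    """Circular shift s that best aligns seq_a with seq_b, i.e., roll(seq_a, s) ~ seq_b,
    in the least-squares sense. Sequences have shape (frames, H, W)."""
    n_frames = seq_a.shape[0]
    costs = [np.sum((np.roll(seq_a, s, axis=0) - seq_b) ** 2)
             for s in range(n_frames)]
    return int(np.argmin(costs))

# Hypothetical usage: two synthetic periodic sequences offset by 7 frames.
rng = np.random.default_rng(0)
frames = np.sin(2 * np.pi * np.arange(40) / 40)[:, None, None] * rng.random((1, 16, 16))
seq_a = frames + 0.05 * rng.standard_normal(frames.shape)
seq_b = np.roll(frames, 7, axis=0) + 0.05 * rng.standard_normal(frames.shape)
print("estimated shift:", pairwise_shift(seq_a, seq_b))   # expected: 7
```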
The combination of wavelength multiplexing and spectral interferometry allows for the encoding of multidimensional information and its transmission over a mono-dimensional channel; for example, measurements of a surface's topography acquired through a monomode fiber in a small endoscope. The local depth of the imaged object is encoded in the local spatial frequency of the signal measured at the output of the fiber-decoder system. We propose a procedure to retrieve the depth-map by determining the signal's instantaneous frequency. First, we compute its continuous, complex-valued, wavelet transform (CWT). The frequency signature at every position is contained in the resulting scalogram. We then extract the ridge of maximal response by use of a dynamic programming algorithm thus directly recovering the object's topography. We present results that validate this procedure based on both simulated and experimental data.
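The two steps of the procedure, building a scalogram and extracting its ridge, can be sketched as follows. The snippet is an illustration only (the Morlet kernel, scale range, jump penalty, and test chirp are assumptions): a small complex Morlet filter bank produces the scalogram magnitude, and a dynamic program extracts the ridge of maximal response while penalizing jumps between neighboring scales.

```python
# Simplified sketch: scalogram computation and dynamic-programming ridge extraction.
import numpy as np

def scalogram(signal, scales, w0=6.0):
    rows = []
    for s in scales:
        t = np.arange(-4 * s, 4 * s + 1)
        morlet = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2) / np.sqrt(s)
        rows.append(np.abs(np.convolve(signal, morlet, mode="same")))
    return np.array(rows)                       # shape: (n_scales, n_samples)

def extract_ridge(mag, jump_penalty=0.5):
    n_scales, n = mag.shape
    cost = np.full((n_scales, n), -np.inf)
    back = np.zeros((n_scales, n), dtype=int)
    cost[:, 0] = mag[:, 0]
    for j in range(1, n):
        for i in range(n_scales):
            prev = cost[:, j - 1] - jump_penalty * np.abs(np.arange(n_scales) - i)
            back[i, j] = int(np.argmax(prev))
            cost[i, j] = mag[i, j] + prev[back[i, j]]
    ridge = np.zeros(n, dtype=int)
    ridge[-1] = int(np.argmax(cost[:, -1]))
    for j in range(n - 2, -1, -1):
        ridge[j] = back[ridge[j + 1], j + 1]
    return ridge                                 # ridge scale index per sample

# Hypothetical usage on a chirp whose instantaneous frequency encodes depth:
t = np.linspace(0, 1, 2000)
sig = np.cos(2 * np.pi * (50 * t + 40 * t ** 2))
ridge = extract_ridge(scalogram(sig, np.linspace(5, 40, 36)))
```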
We present a zero-order and twin-image elimination algorithm for digital Fresnel holograms that were acquired in an off-axis geometry. These interference terms arise when the digital hologram is reconstructed and corrupt the result. Our algorithm is based on the Fresnelet transform, a wavelet-like transform that uses basis functions tailor-made for digital holography. We show that, in the Fresnelet domain, the coefficients associated with the interference terms are separated both spatially and with respect to the frequency bands. We propose a method to suppress them by selectively thresholding the Fresnelet coefficients. Unlike other methods that operate in the Fourier domain and affect the whole spatial domain, our method operates locally in both space and frequency, allowing for more targeted processing.
We present a numerical two-step reconstruction procedure for digital off-axis Fresnel holograms. First, we retrieve the amplitude and phase of the object wave in the CCD plane. For each point, we solve a weighted linear set of equations in the least-squares sense. The algorithm has O(N) complexity and offers great flexibility. Second, we numerically propagate the obtained wave to achieve proper focus. We apply the method to microscopy and demonstrate its suitability for the real-time imaging of biological samples.
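The second step, numerical propagation of the retrieved complex wave, can be sketched with the standard angular spectrum method. The snippet below is illustrative (wavelength, pixel size, and distance are example values, and the first step, the per-pixel weighted least-squares retrieval, is not shown):

```python
# Simplified sketch: refocus a retrieved complex wave via the angular spectrum method.
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_size, distance):
    n_y, n_x = field.shape
    fx = np.fft.fftfreq(n_x, d=pixel_size)
    fy = np.fft.fftfreq(n_y, d=pixel_size)
    fxx, fyy = np.meshgrid(fx, fy)
    # Propagation transfer function; evanescent components are suppressed.
    arg = 1.0 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    h = np.exp(1j * kz * distance) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * h)

# Hypothetical usage: propagate a retrieved wave by 2 mm to bring the sample into focus.
wave = np.ones((512, 512), dtype=complex)         # stand-in for the retrieved object wave
focused = angular_spectrum_propagate(wave, wavelength=633e-9,
                                     pixel_size=6.7e-6, distance=2e-3)
```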
We consider using spline interpolation to improve the standard filtered backprojection (FBP) tomographic reconstruction algorithm. In particular, we propose to link the design of the filtering operator with the interpolation model that is applied to the sinogram. The key idea is to combine the ramp filtering and the spline fitting process into a single filtering operation. We consider three different approaches. In the first, we simply adapt the standard FBP for spline interpolation. In the second approach, we replace the interpolation by an oblique projection onto the same spline space; this increases the peak signal-to-noise ratio by up to 2.5 dB. In the third approach, we perform an explicit discretization by observing that the ramp filter is equivalent to a fractional derivative operator that can be evaluated analytically for splines. This allows for an exact implementation of the ramp filter and improves the image quality by an additional 0.2 dB. This comparison is unique, as the first method has been published only for degree n=0, whereas the two other methods are novel. We stress that the modifications of the filter improve the reconstruction quality especially at low (faster) interpolation degrees n=0,1; the differences between the methods become marginal for cubic or higher degrees (n ≥ 3).
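For reference, the Fourier-domain reading of the ramp filter as a fractional derivative can be stated as follows (a standard identity, given here under the usual conventions; the spline-domain closed forms derived in the paper are not reproduced):

```latex
% Ramp filtering multiplies the sinogram spectrum by |omega|, which is the
% Fourier symbol of the fractional derivative (half-Laplacian) of order 1:
\[
\widehat{(q * g)}(\omega) \;=\; |\omega|\,\hat{g}(\omega)
\;=\; \mathcal{F}\!\left\{ \Bigl(-\tfrac{d^{2}}{dx^{2}}\Bigr)^{1/2} g \right\}(\omega),
\]
% and for a polynomial B-spline g = \beta^{n} this operator can be evaluated analytically.
```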
KEYWORDS: Wavelets, Holograms, Digital holography, Signal processing, Fourier transforms, 3D image reconstruction, Diffraction, Convolution, CCD cameras, Signal detection
We present a new class of wavelet bases---Fresnelets---which is obtained by applying the Fresnel transform operator to a wavelet basis of L2. The thus constructed wavelet family exhibits properties that are particularly useful for analyzing and processing optically generated holograms recorded on CCD-arrays. We first investigate the multiresolution properties (translation, dilation) of the Fresnel transform that are needed to construct our new wavelet. We derive a Heisenberg-like uncertainty relation that links the localization of the Fresnelets with that of the original wavelet basis. We give the explicit expression of orthogonal and semi-orthogonal Fresnelet bases corresponding to polynomial spline wavelets. We conclude that the Fresnel B-splines are particularly well suited for processing holograms because they tend to be well localized in both domains.
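For context, one common convention for the Fresnel transform that underlies this construction (normalization and sign conventions vary across the literature) reads:

```latex
% Fresnel transform as convolution with a chirp kernel parametrized by tau:
\[
\tilde{f}_{\tau}(x) \;=\; (f * k_{\tau})(x),
\qquad
k_{\tau}(x) \;=\; \frac{1}{\tau}\, e^{\,i\pi (x/\tau)^{2}},
\qquad
\tau \;=\; \sqrt{\lambda d},
\]
% where lambda is the wavelength and d the propagation distance; Fresnelets are
% obtained by applying this operator to a wavelet basis.
```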