Guest Editors Marija Strojnik, Wen Chen, Sarath Gunapala, Joern Helbert, Esteban Vera, and Eric Shirley introduce the Special Section on Advanced Infrared Technology and Remote Sensing Applications II.
Infrared (IR) imaging systems have sensor and optical limitations that result in degraded imagery. In addition to the blurring and aliasing introduced by imperfect optics and the finite detector size, detector fixed-pattern noise adds a significant layer of degradation to the collected imagery. Here, we propose a single-shot super-resolution method that compensates for the nonuniformity noise of long-wave IR imaging systems. The strategy combines wavefront modulation with a reconstruction methodology based on total-variation and nonlocal-means regularizers to recover high spatial frequencies while reducing noise. In simulations and experiments, we demonstrate a clear improvement of up to 16× in image resolution while significantly decreasing the fixed-pattern noise in the reconstructed images.
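The regularized-reconstruction idea can be illustrated in isolation. The following is a minimal NumPy sketch of total-variation-regularized restoration only (the nonlocal-means term and the wavefront-modulation forward model are omitted); the piecewise-constant test scene, stripe amplitude, step size, and regularization weight are illustrative choices, not the authors' settings:

```python
import numpy as np

def tv_grad(x, eps=1e-3):
    # Gradient of a smoothed (Charbonnier) total-variation penalty.
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps**2)
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div

def tv_restore(y, lam=0.15, step=0.2, iters=200):
    # Minimize 0.5*||x - y||^2 + lam*TV(x) by gradient descent.
    x = y.copy()
    for _ in range(iters):
        x -= step * ((x - y) + lam * tv_grad(x))
    return x

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0    # piecewise-constant scene
noisy = clean + 0.1 * rng.standard_normal(clean.shape)   # random detector noise
noisy += 0.05 * rng.standard_normal((1, 64))             # column stripes mimicking fixed-pattern noise
restored = tv_restore(noisy)
print(np.mean((noisy - clean)**2), np.mean((restored - clean)**2))
```

The edge-preserving TV prior suppresses both the random noise and the column stripes while keeping the square's boundaries sharp, which is why such regularizers pair well with nonuniformity compensation.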
In this work, we evaluate a specially crafted deep convolutional neural network that estimates wavefront aberration modes directly from pyramidal wavefront sensor (PyWFS) images. Overall, the use of deep neural networks improves both the estimation performance and the operational range of the PyWFS, especially under strong turbulence or bad seeing, i.e., large D0/r0 ratios. Our preliminary results provide evidence that by using neural networks instead of the classic linear estimation methods, we can obtain the sensitivity of a low-modulation response while extending the linearity range of the PyWFS, reducing the residual variance by a factor of 1.6 when dealing with an r0 as low as a few centimeters.
We present the design and implementation of an adaptive optics test bench recently built at the School of Electrical Engineering of the Pontificia Universidad Católica de Valparaíso in Chile. The flexible design of the PULPOS bench incorporates state-of-the-art, high-speed spatial light modulators for atmospheric turbulence emulation and wavefront correction, a deformable mirror for modulation, and a variety of wavefront sensors such as a pyramid wavefront sensor. PULPOS serves as a platform for research on adaptive optics and wavefront reconstruction using artificial intelligence techniques, as well as for educational purposes.
We have recently proposed the deep learning wavefront sensor, capable of directly estimating the Zernike coefficients of aberrated wavefronts from a single intensity image using a convolutional neural network. However, deep neural networks demand an intensive training stage, where more training examples allow higher accuracy and a larger number of estimated Zernike modes. Since low-order aberrations such as tip and tilt only produce a space-invariant motion of the PSF, we propose to treat tip and tilt estimation separately when training the deep learning wavefront sensor, reducing the training effort while maintaining the wavefront sensor's performance. In this paper, we also introduce and test simpler architectures for deep learning wavefront sensing, while exploring the impact of reducing the number of pixels used to estimate a given number of Zernike coefficients. Our preliminary results indicate that we can achieve a significant prediction speedup, aiming at real-time adaptive optics systems.
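The tip/tilt separation rests on the fact that these two modes only translate the PSF in the focal plane. A minimal NumPy sketch (with a hypothetical 128-sample pupil grid and tip/tilt expressed in waves across the pupil; not the authors' pipeline) shows that the PSF centroid shift is directly proportional to the tip/tilt coefficients, so these modes can be read off without a network:

```python
import numpy as np

N = 128
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2) <= 1.0          # circular aperture

def psf(tip, tilt):
    # Tip/tilt in waves across the pupil; PSF via Fraunhofer propagation (FFT).
    phase = 2 * np.pi * (tip * X + tilt * Y)
    field = pupil * np.exp(1j * phase)
    return np.abs(np.fft.fftshift(np.fft.fft2(field)))**2

def centroid_shift(img):
    # Intensity centroid displacement (pixels) from the array center.
    total = img.sum()
    iy, ix = np.indices(img.shape)
    return ((ix * img).sum() / total - (img.shape[1] - 1) / 2,
            (iy * img).sum() / total - (img.shape[0] - 1) / 2)

ref = centroid_shift(psf(0.0, 0.0))
sx, sy = centroid_shift(psf(2.0, -1.0))
print(sx - ref[0], sy - ref[1])   # shifts proportional to (tip, tilt)
```

Because the linear phase merely translates the diffraction pattern, a simple centroid recovers tip and tilt, leaving the network free to concentrate its capacity on the higher-order modes.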
Snapshot compressive imaging aims to capture high-resolution images using low-resolution detectors. The challenge is the generation of simultaneous optical projections that fulfill the compressed sensing reconstruction requirements. We propose the use of controlled aberrations, through wavefront coding, to produce point spread functions that can simultaneously code and multiplex the scene in a variety of ways. Besides being light efficient, the approach allows the system matrix response to be characterized analytically. We explore combinations of Zernike modes and analyze the corresponding coherence parameter. Simulation results using natively sparse and natural scenes demonstrate the feasibility of using controlled aberrations for compressive imaging.
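The coherence parameter referred to above measures the worst-case correlation between columns of the system matrix; smaller values favor compressed sensing recovery. A generic NumPy illustration, using a random matrix as a stand-in for the wavefront-coded system response (the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 32, 128                      # 32 measurements of a 128-dimensional scene
A = rng.standard_normal((m, n))     # stand-in for the aberration-coded system matrix
A /= np.linalg.norm(A, axis=0)      # unit-norm columns
G = np.abs(A.T @ A)                 # column cross-correlations (Gram matrix)
np.fill_diagonal(G, 0.0)
mu = G.max()                        # mutual coherence of the sensing matrix
print(mu)
```

Evaluating this quantity for the PSFs produced by different Zernike-mode combinations is what lets the aberration design be compared against compressed sensing requirements.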
We have previously introduced a high throughput multiplexing computational spectral imaging device. The device measures scalar projections of pseudo-arbitrary spectral filters at each spatial pixel. This paper discusses simulation and initial experimental progress in performing computational spectral unmixing by taking advantage of the natural sparsity commonly found in the fractional abundances. The simulation results show a lower unmixing error compared to traditional spectral imaging devices. Initial experimental results demonstrate the ability to directly perform spectral unmixing with less error than multiplexing alone.
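Spectral unmixing itself can be sketched compactly. The toy example below (hypothetical Gaussian endmember spectra and abundances; not the authors' instrument data) recovers fractional abundances by nonnegative least squares via projected gradient descent; in this simple case the nonnegativity constraint alone already yields the sparse solution, whereas the paper additionally exploits sparsity in the reconstruction:

```python
import numpy as np

def nnls_pg(E, y, iters=500):
    # Nonnegative least squares by projected gradient descent:
    # find abundances a >= 0 minimizing ||E a - y||^2.
    a = np.zeros(E.shape[1])
    step = 1.0 / np.linalg.norm(E.T @ E, 2)   # 1 / largest eigenvalue
    for _ in range(iters):
        a = np.maximum(0.0, a - step * (E.T @ (E @ a - y)))
    return a

wl = np.linspace(0, 1, 40)                    # normalized wavelength axis
# Two hypothetical Gaussian endmember spectra as matrix columns
E = np.stack([np.exp(-((wl - 0.3) / 0.10)**2),
              np.exp(-((wl - 0.7) / 0.15)**2)], axis=1)
true_a = np.array([0.7, 0.3])                 # fractional abundances
y = E @ true_a + 0.001 * np.random.default_rng(2).standard_normal(40)
est = nnls_pg(E, y)
print(est)
```

In the multiplexing device, the measured scalar projections replace the full spectrum y, which is what allows unmixing to be performed directly on the compressed data.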
To support the statistical analysis of x-ray threat detection, we developed a very high-throughput x-ray modeling framework based upon GPU technologies and have created three different versions focusing on transmission, scatter, and phase. The simulation of transmission imaging is based on a deterministic photo-absorption approach. This initial transmission approach is then extended to include scatter effects that are computed via the Born approximation. For phase, we modify the transmission framework to propagate complex ray amplitudes rather than radiometric quantities. The highly-optimized NVIDIA OptiX API is used to implement the required ray-tracing in all frameworks, greatly speeding up code execution. In addition, we address volumetric modeling of objects via a hierarchical representation structure of triangle-mesh-based surface descriptions. We show that the x-ray transmission and phase images of complex 3D models can be simulated within seconds on a desktop computer, while scatter images take approximately 30-60 minutes as a result of the significantly greater computational complexity.
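The deterministic photo-absorption model underlying the transmission framework is the Beer-Lambert law: each ray's intensity decays exponentially with the attenuation-weighted path length through the traversed materials. A one-ray sketch (the attenuation coefficients and path lengths below are illustrative values, not ones from the framework):

```python
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm) at a single x-ray energy
mu = {"water": 0.20, "aluminum": 0.75}
# Path lengths (cm) traversed by one ray through each material
t = {"water": 4.0, "aluminum": 1.0}

I0 = 1.0                                           # incident intensity
I = I0 * np.exp(-sum(mu[m] * t[m] for m in mu))    # Beer-Lambert law
print(I)
```

The GPU ray tracer's job is to compute the per-material path lengths t for millions of rays through the triangle-mesh models; the attenuation itself is this one-line exponential per ray.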
The monochromatic single-frame pixel count of a camera is limited by diffraction to the space-bandwidth product, roughly the aperture area divided by the square of the wavelength. We have recently shown that it is possible to approach this limit using multiscale lenses for cameras with space-bandwidth product between 1 and 100 gigapixels. When color, polarization, coherence and time are included in the image data cube, camera information capacity may exceed 1 petapixel/second. This talk reviews progress in the construction of DARPA AWARE gigapixel cameras and describes compressive measurement strategies that may be used in combination with multiscale systems to push camera capacity to near physical limits.
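The quoted limit is easy to check with back-of-the-envelope numbers; for a hypothetical 10 cm aperture in green light (values chosen for illustration, not taken from the talk):

```python
import math

aperture_diameter = 0.1          # m, hypothetical 10 cm aperture
wavelength = 550e-9              # m, green light
area = math.pi * (aperture_diameter / 2)**2
sbp = area / wavelength**2       # diffraction-limited pixel count
print(sbp / 1e9)                 # ≈ 26 (gigapixels)
```

Even a modest aperture thus supports tens of gigapixels in principle; conventional single-lens designs fall short of this bound, which is the gap multiscale lenses aim to close.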
Traditional approaches to persistent surveillance generate prodigious amounts of data, stressing storage, communication, and analysis systems. As such, they are well suited for compressed sensing (CS) concepts. Existing demonstrations of compressive target tracking have utilized time-sequences of random patterns, an approach that is sub-optimal for real-world dynamic scenes. We have been investigating an alternative architecture that we term SCOUT (the Static Computational Optical Undersampled Tracker), which uses a pair of static masks and a defocused detector to acquire a small number of measurements in parallel. We will report on our working prototypes that have demonstrated successful target tracking at 16× compression.
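The undersampled-tracking principle can be sketched in a toy model. Here a random matrix stands in for the static-mask/defocus optics, and a point target is localized by matching the compressed measurement against precomputed signatures for each position (the dimensions and the nearest-neighbor decoder are illustrative; the actual SCOUT reconstruction differs):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 256, 16                      # 16x fewer measurements than scene pixels
Phi = rng.standard_normal((m, n))   # stand-in for the static-mask/defocus optics

def scene(pos):
    x = np.zeros(n)
    x[pos] = 1.0                    # single point target
    return x

# Dictionary of measurement signatures for every possible target position
D = Phi @ np.stack([scene(p) for p in range(n)], axis=1)

true_pos = 137
y = Phi @ scene(true_pos) + 0.05 * rng.standard_normal(m)   # noisy snapshot
est = int(np.argmin(np.linalg.norm(D - y[:, None], axis=0)))
print(est)                          # → 137 (the true position)
```

Because a sparse target occupies few scene pixels, far fewer measurements than pixels suffice to localize it, and the static masks let all of them be acquired in a single parallel exposure rather than as a time-sequence of patterns.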
The DARPA MOSAIC program applies multiscale optical design (a shared objective lens and a parallel array of microcameras) to the acquisition of high-pixel-count images. Interestingly, these images present as many challenges as opportunities. The imagery is acquired over many slightly overlapping fields with diverse focal, exposure, and temporal parameters. Estimation of a consensus image, display of imagery at human-comprehensible resolutions, automated anomaly detection to guide viewer attention, and power management in a distributed electronic environment are just a few of the novel challenges that arise. This talk describes some of these challenges and presents progress to date.
In this paper, a novel color space transform is presented. It is an adaptive transform based on the application of independent component analysis (ICA) to the RGB data of an entire color image. The result is a linear and reversible color space transform that provides three new coordinate axes onto which the projected data are as statistically independent as possible, and therefore highly uncorrelated. Compared with nonlinear color space transforms such as HSV or CIE-Lab, the proposed transform has the advantage of being linear in the RGB color space, much like XYZ or YIQ. However, its adaptiveness has the drawback of requiring an estimate of the transform matrix for each image, which can be computationally expensive for large images owing to the typically iterative nature of ICA implementations. An image subsampling method is therefore also proposed to improve the speed, efficiency, and robustness of the new transform. The new color space is applied to a large set of test color images and compared with traditional color space transforms, clearly demonstrating its potential as a tool for purposes such as segmentation.
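The construction can be sketched with a minimal FastICA implementation (symmetric decorrelation, tanh nonlinearity) applied to synthetic correlated channels; for a real image, the flattened R, G, B planes would form the three rows of X. This is a generic ICA sketch under those assumptions, not the paper's implementation:

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    # Symmetric FastICA with a tanh nonlinearity on whitened data.
    # X: (channels, samples), e.g. the flattened R, G, B planes.
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(X))
    K = E @ np.diag(d**-0.5) @ E.T            # whitening matrix
    Z = K @ X
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[0],) * 2)
    for _ in range(n_iter):
        g = np.tanh(W @ Z)
        W = (g @ Z.T) / Z.shape[1] - np.diag((1.0 - g**2).mean(axis=1)) @ W
        u, _, vt = np.linalg.svd(W)           # symmetric decorrelation:
        W = u @ vt                            # W <- (W W^T)^(-1/2) W
    return W @ K                              # full RGB -> ICA transform matrix

# Synthetic "image": three correlated channels mixed from non-Gaussian sources
rng = np.random.default_rng(4)
S = rng.laplace(size=(3, 5000))
A = rng.standard_normal((3, 3))
rgb = A @ S
T = fastica(rgb)
Y = T @ (rgb - rgb.mean(axis=1, keepdims=True))
C = np.corrcoef(Y)
print(np.max(np.abs(C - np.eye(3))))          # new axes are ~uncorrelated
```

Since T is a plain 3×3 matrix, the transform is linear and reversible exactly as claimed; the per-image cost lies in the iterative loop above, which is what the proposed subsampling scheme accelerates.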
The Gemini telescopes were designed to be infrared-optimized. Among the features specified for optimal performance is the use of silver-based coatings on the mirrors. The feasibility study contracted by Gemini in 1994-1995 provided both techniques and recipes to apply these high-reflectivity and low-emissivity films. All this effort is now being implemented in our coating plants. At the time of the study, sputtering experiments showed that a reflectivity of 99.1% at 10μm was achievable. We have now produced bare and protected silver sputtered films in our coating plants and conducted environmental testing, both accelerated and in real-life conditions, to assess the durability. We have also already applied, for the first time ever, protected-silver coatings on the main optical elements (M1, M2 and M3) of an 8-m telescope. We report here the progress to date, the performance of the films, and our long-term plans for mirror coatings and maintenance.
The nonuniform response of infrared focal plane array (IRFPA) detectors produces images corrupted by fixed-pattern noise. In this paper, we present an enhanced adaptive scene-based nonuniformity correction (NUC) technique. The method simultaneously estimates the detector parameters and performs the nonuniformity compensation using a neural network approach. In addition, the proposed method makes no assumptions about the kind or amount of nonuniformity present in the raw data. The strength and robustness of the proposed method lie in avoiding ghosting artifacts through the use of optimization techniques in the parameter-estimation learning process: momentum, regularization, and an adaptive learning rate. The proposed method has been tested on video sequences of simulated and real infrared data taken with an InSb IRFPA, reaching high correction levels, reducing the fixed-pattern noise, decreasing ghosting, and achieving an effective frame-by-frame adaptive estimation of each detector's gain and offset.