The explosive growth of satellites in low Earth orbit (LEO) demands advanced surveillance and communication capabilities. However, atmospheric turbulence hinders high-resolution imaging and high-speed communication at optical wavelengths, which remain the only viable option. SEETRUE (Sharp wavefront sEnsing for adaptivE opTics in gRound-based satellite commUnications and spacE surveillance) proposes a game-changing solution: cost-effective, AI-driven wavefront sensing for adaptive optics (AO) in optical ground stations. It features a unique ground station equipped with a 50 cm robotic telescope with AO capability, a multi-purpose 38 cm binocular telescope, and an atmospheric profiling system. AI-powered wavefront sensors (WFS) within the system leverage novel turbulence models and a revolutionary "end-to-end" design approach to maximize information extraction. This enables compact, low-cost AO solutions, overcoming a major barrier to widespread adoption and paving the way for accessible and affordable space communication and surveillance for all.
The new generation of extremely large telescopes (ELTs) introduces many challenges in optics and engineering. A key challenge is the development of adaptive optics systems able to handle elongated laser guide stars (ELGS). Classic wavefront sensors (WFS), such as the Shack-Hartmann wavefront sensor (SHWFS) or the pyramid wavefront sensor (PyWFS), cannot readily handle elongated stars, and the problem worsens as the atmospheric turbulence becomes stronger. In this work, we present a novel complex field wavefront sensor (CFWFS) that reconstructs the phase and amplitude of extended bodies at the image plane, and then recovers the turbulent phase at the pupil plane. The proposed WFS scheme uses a four-times-faster parallel phase retrieval algorithm with only eight designed coded apertures (DCA), obtained using sphere packing coded apertures (SPCA). We present a collection of encouraging preliminary simulation results.
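A minimal sketch of this kind of multi-aperture phase retrieval, using alternating projections over eight binary masks; the random masks, grid size, and update rule are illustrative assumptions, not the actual SPCA designs or the paper's four-times-faster parallel algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 64, 8                         # grid size, number of coded apertures

# Smooth random phase screen as the ground-truth complex field
fx, fy = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N))
lowpass = np.exp(-(fx**2 + fy**2) / 0.005)
phase = np.real(np.fft.ifft2(np.fft.fft2(rng.standard_normal((N, N))) * lowpass))
x_true = np.exp(1j * 3.0 * phase / np.abs(phase).max())

# Binary coded apertures (random stand-ins for the sphere-packing designs)
masks = (rng.random((K, N, N)) > 0.5).astype(float)
meas = np.abs(np.fft.fft2(masks * x_true))       # measured Fourier magnitudes

# Alternating projections, averaging the K per-mask object estimates
x = np.exp(1j * rng.uniform(-np.pi, np.pi, (N, N)))
for _ in range(300):
    num = np.zeros((N, N), complex)
    for k in range(K):
        Y = np.fft.fft2(masks[k] * x)
        Y = meas[k] * np.exp(1j * np.angle(Y))   # enforce measured magnitude
        num += masks[k] * np.fft.ifft2(Y)        # back-project to object plane
    x = num / np.maximum(masks.sum(axis=0), 1e-6)
```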
The Pyramid Wavefront Sensor (PWFS) is one of the preferred choices for measuring wavefront aberrations for adaptive optics in highly sensitive astronomical applications. Despite its inherent high sensitivity, its low linearity degrades the operational range of the phase estimation. This problem has traditionally been addressed by optically modulating the PSF across the pyramid. However, modulation requires movable physical parts and additional calibration, and it trades sensitivity for linearity. We created an end-to-end (E2E) trainable scheme that includes the PWFS propagation model, an optical diffractive layer at the Fourier plane, and a state-of-the-art deep neural network that performs wavefront reconstruction. The joint training routine for the physical and digital trainable elements is conducted under a variety of simulated atmospheric conditions of different strengths, along with their Zernike decompositions for comparison with the ones estimated by our model. We developed a variety of training schemes, varying the turbulence ranges and the balance between the optical and digital layers. Simulation results show an overall improvement in wavefront estimation even beyond the trained turbulence ranges, improving linearity while maintaining sensitivity at weak turbulence and surpassing previous results that considered only one diffractive element and linear wavefront estimation. We are currently performing experimental closed-loop adaptive optics tests, and simulations are already displaying encouraging results.
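A toy end-to-end model of such a scheme in PyTorch, with a trainable phase mask at the Fourier plane of a pyramid-like propagation model followed by a small CNN; the prism phase, network size, and training targets are stand-in assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class E2EPyramidWFS(nn.Module):
    """Toy end-to-end sensor: a trainable phase mask at the Fourier plane of a
    pyramid-like propagation model, followed by a small CNN regressor. All
    shapes and the prism phase are illustrative, not the paper's design."""
    def __init__(self, n=64, n_modes=20):
        super().__init__()
        self.mask_phase = nn.Parameter(torch.zeros(n, n))   # diffractive layer
        yy, xx = torch.meshgrid(torch.linspace(-1, 1, n),
                                torch.linspace(-1, 1, n), indexing="ij")
        self.register_buffer("pyramid", 40.0 * (xx.abs() + yy.abs()))  # 4 facets
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * (n // 4) ** 2, n_modes))

    def forward(self, wf_phase):                 # (B, n, n) pupil phase [rad]
        field = torch.exp(torch.complex(torch.zeros_like(wf_phase), wf_phase))
        focal = torch.fft.fftshift(torch.fft.fft2(field), dim=(-2, -1))
        p = self.pyramid + self.mask_phase       # fixed prism + learned mask
        focal = focal * torch.exp(torch.complex(torch.zeros_like(p), p))
        img = torch.fft.ifft2(torch.fft.ifftshift(focal, dim=(-2, -1))).abs() ** 2
        return self.cnn(img.unsqueeze(1))        # Zernike coefficient estimates

model = E2EPyramidWFS()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
phases = torch.randn(8, 64, 64)                  # stand-in turbulent phases
targets = torch.randn(8, 20)                     # stand-in Zernike coefficients
opt.zero_grad()
loss = nn.functional.mse_loss(model(phases), targets)  # joint optical+digital step
loss.backward(); opt.step()
```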
Tip-tilt correction is essential for modern adaptive optics systems. However, fast steering mirrors are costly and may be limited in speed as the size of the mirror grows. We propose the use and control of a novel fast steering mirror (FSM) based on carbon fiber mirrors. We designed the system with three voice coil actuators mounted on a rapid-prototyped structure, enabling precise motion control via a microcontroller. We especially crafted a 1-inch carbon fiber mirror that weighs significantly less than conventional glass mirrors, delivering faster dynamics for the actuator motion. We report on the construction and preliminary characterization of the static and dynamic behavior of the tip-tilt mirror.
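For context, the control side of such a mirror often reduces to a discrete PID loop per axis; a minimal sketch with assumed gains, loop rate, and a crude first-order plant, none of which are the actual system's values:

```python
# Minimal discrete PID loop for one tip/tilt axis of a voice-coil steering
# mirror. Gains, rate, and the plant model are illustrative assumptions;
# a real controller would run one such loop per axis on the microcontroller.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = PID(kp=0.8, ki=120.0, kd=0.002, dt=1e-3)     # 1 kHz loop
angle = 0.0                                         # mirror tilt state (mrad)
for _ in range(1000):
    command = pid.step(setpoint=1.0, measured=angle)
    angle += 0.05 * (command - angle)               # crude first-order plant
```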
The rise of Extremely Large Telescopes (ELTs) poses challenges for high-resolution phase map reconstruction. Despite the promise of the pyramid wavefront sensor (PyWFS), its inherent non-linearity is limiting. This study proposes techniques to enhance the linearity of the non-modulated PyWFS through deep learning, comparing convolutional neural network (CNN) models (Xception, WFNet, ConvNext) with the transformer model Global Context Vision Transformers (GCViT). Results favor transformers, highlighting CNN limitations near pupil borders. Experimental validation on the PULPOS optical bench underscores the robustness of the GCViT. Trained solely on simulated data under varied SNR and D/r0 conditions, our approach accurately closes the AO loop in a real system, leaving behind the reconstruction paradigm based on the interaction matrix. We demonstrate the high performance of the GCViT in closed loop, obtaining a Strehl ratio above 0.6 for strong turbulence and nearly 0.95 for weak turbulence on the PULPOS optical bench.
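A skeleton of how a learned reconstructor replaces the interaction matrix inside the loop; `reconstructor`, `get_wfs_image`, and `apply_dm` are hypothetical stand-ins for the trained GCViT and the bench hardware interfaces:

```python
import numpy as np

def close_loop(reconstructor, get_wfs_image, apply_dm, n_modes=50,
               gain=0.4, n_iters=200):
    """Plain integrator AO loop driven by a neural reconstructor instead of
    an interaction-matrix inverse. All interfaces are assumed stand-ins."""
    dm_modes = np.zeros(n_modes)               # current DM modal command
    for _ in range(n_iters):
        img = get_wfs_image()                  # PyWFS frame of the residual
        residual = reconstructor(img)          # NN estimate of residual modes
        dm_modes -= gain * residual            # integrator update
        apply_dm(dm_modes)
    return dm_modes
```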
Adaptive optics (AO) is crucial for Extremely Large Telescopes (ELTs), and its core of operation lies in the wavefront sensor. Although the Shack-Hartmann and pyramid wavefront sensors are more common, the axicon wavefront sensor (AxWFS) is a less-explored alternative in which the light is projected onto the detector over a doughnut-shaped area. This study introduces a groundbreaking enhancement, employing a state-of-the-art deep neural network to perform wavefront estimation from the intensity changes within the ring produced by the axicon under different turbulence conditions, without requiring any optical modulation.
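A quick way to see the doughnut-shaped response is to propagate a circular pupil carrying a conical phase to the focal plane; the grid size, cone slope, and test aberration below are illustrative assumptions:

```python
import numpy as np

# Far-field intensity of a circular pupil with a conical (axicon) phase:
# the PSF becomes a ring whose local intensity changes encode the wavefront.
N = 256
y, x = np.mgrid[-N//2:N//2, -N//2:N//2] / (N / 2)
r = np.hypot(x, y)
pupil = (r <= 1.0).astype(float)
axicon = np.exp(-1j * 2 * np.pi * 15 * r)                # 15 waves of cone slope
aberration = np.exp(1j * 2.0 * (2 * r**2 - 1) * pupil)   # defocus-like test phase
field = pupil * axicon * aberration
psf = np.abs(np.fft.fftshift(np.fft.fft2(field, s=(4 * N, 4 * N)))) ** 2
# `psf` shows a bright ring; wavefront errors modulate its intensity pattern.
```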
Guest Editors Marija Strojnik, Wen Chen, Sarath Gunapala, Joern Helbert, Esteban Vera, and Eric Shirley introduce the Special Section on Advanced Infrared Technology and Remote Sensing Applications II.
Infrared (IR) imaging systems have sensor and optical limitations that result in degraded imagery. Beyond the blurring and aliasing introduced by imperfect optics and the finite detector size, detector fixed-pattern noise adds a significant layer of degradation to the collected imagery. Here, we propose a single-shot super-resolution method that compensates for the nonuniformity noise of long-wave IR imaging systems. The strategy combines wavefront modulation and a reconstruction methodology based on total variation and nonlocal means regularizers to recover high spatial frequencies while reducing noise. In simulations and experiments, we demonstrate a clear improvement of up to 16× in image resolution while significantly decreasing the fixed-pattern noise in the reconstructed images.
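As a sketch of the reconstruction step, the total-variation part of such a scheme can be minimized by plain gradient descent; `forward`/`adjoint` stand in for the blur-plus-downsampling model and its transpose, and the nonlocal-means regularizer of the actual method is omitted:

```python
import numpy as np

def tv_grad(u, eps=1e-6):
    """Gradient of the smoothed isotropic total variation of image u."""
    ux = np.roll(u, -1, 1) - u
    uy = np.roll(u, -1, 0) - u
    mag = np.sqrt(ux**2 + uy**2 + eps)
    div = (ux / mag - np.roll(ux / mag, 1, 1)) + (uy / mag - np.roll(uy / mag, 1, 0))
    return -div

def restore(y, forward, adjoint, lam=0.02, step=0.5, n_iters=200):
    """Minimize ||forward(u) - y||^2 + lam * TV(u) by gradient descent.
    `forward`/`adjoint` are assumed callables for the system model."""
    u = adjoint(y)
    for _ in range(n_iters):
        grad = adjoint(forward(u) - y) + lam * tv_grad(u)
        u -= step * grad
    return u
```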
In this work, we evaluate an especially crafted deep convolutional neural network that estimates the wavefront aberration modes directly from pyramid wavefront sensor (PyWFS) images. Overall, the use of deep neural networks improves both the estimation performance and the operational range of the PyWFS, especially in cases of strong turbulence or bad seeing ratios D/r0. Our preliminary results provide evidence that by using neural networks instead of the classic linear estimation methods, we can obtain a low-modulation sensitivity response while extending the linearity range of the PyWFS, reducing the residual variance by a factor of 1.6 when dealing with an r0 as low as a few centimeters.
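For reference, the classic linear baseline that such networks are compared against can be written as an interaction-matrix calibration followed by a pseudo-inverse; `pywfs_image` is a hypothetical simulator call:

```python
import numpy as np

# Classic linear estimator: poke each Zernike mode to build an interaction
# matrix, then estimate modes via its pseudo-inverse. `pywfs_image(phase)`
# and `zernike_basis` (a list of 2-D phase maps) are assumed stand-ins.
def calibrate(pywfs_image, zernike_basis, amp=0.1):
    ref = pywfs_image(np.zeros_like(zernike_basis[0]))
    cols = [(pywfs_image(amp * z) - ref).ravel() / amp for z in zernike_basis]
    return np.linalg.pinv(np.stack(cols, axis=1)), ref

def linear_estimate(img, pinv, ref):
    return pinv @ (img - ref).ravel()
```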
We present the design and implementation of an adaptive optics test bench recently built at the School of Electrical Engineering of the Pontificia Universidad Católica de Valparaíso in Chile. The flexible design of the PULPOS bench incorporates state-of-the-art, high-speed spatial light modulators for atmospheric turbulence emulation and wavefront correction, a deformable mirror for modulation, and a variety of wavefront sensors such as a pyramid wavefront sensor. PULPOS serves as a platform for research on adaptive optics and wavefront reconstruction using artificial intelligence techniques, as well as for educational purposes.
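For turbulence emulation on a spatial light modulator, a common recipe (not necessarily the exact one used on PULPOS) is the FFT-based Kolmogorov phase screen:

```python
import numpy as np

def kolmogorov_screen(n=256, r0=0.1, L=2.0, seed=0):
    """FFT-based Kolmogorov phase screen over an L x L meter grid (radians).
    Standard recipe without subharmonics, so the largest scales are
    under-represented; n, r0, and L are example values."""
    rng = np.random.default_rng(seed)
    df = 1.0 / L                                    # frequency spacing [1/m]
    f = np.fft.fftfreq(n, d=L / n)
    fx, fy = np.meshgrid(f, f)
    f2 = fx**2 + fy**2
    f2[0, 0] = np.inf                               # drop the undefined piston bin
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f2 ** (-11.0 / 6.0)  # phase PSD
    cn = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    screen = np.fft.ifft2(cn * np.sqrt(psd) * df) * n * n
    return np.real(screen)
```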
We have recently proposed the deep learning wavefront sensor, capable of directly estimating the Zernike coefficients of aberrated wavefronts from a single intensity image by using a convolutional neural network. However, deep neural networks demand an intensive training stage, where more training examples improve the accuracy and increase the number of Zernike modes that can be estimated. Since low-order aberrations such as tip and tilt only produce a space-invariant motion of the PSF, we propose to treat tip and tilt estimation separately when training the deep learning wavefront sensor, reducing the training effort while preserving the wavefront sensor performance. In this paper, we also introduce and test simpler architectures for deep learning wavefront sensing, while exploring the impact of reducing the number of pixels used to estimate a given number of Zernike coefficients. Our preliminary results indicate that we can achieve a significant prediction speedup, aiming for real-time adaptive optics systems.
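The space-invariance argument is easy to verify numerically: by the Fourier shift theorem, a linear phase ramp across the pupil only translates the PSF, so a centroid captures tip and tilt. A small demonstration with assumed grid sizes:

```python
import numpy as np

# Pure tilt across the pupil moves the PSF without changing its shape,
# which is why tip/tilt can be handled separately from higher-order modes.
N = 128
y, x = np.mgrid[-N//2:N//2, -N//2:N//2] / (N / 2)
pupil = (np.hypot(x, y) <= 1.0).astype(float)

def psf(phase):
    f = np.fft.fftshift(np.fft.fft2(pupil * np.exp(1j * phase), s=(4*N, 4*N)))
    return np.abs(f) ** 2

def centroid(img):
    iy, ix = np.indices(img.shape)
    return np.array([np.sum(iy * img), np.sum(ix * img)]) / img.sum()

shift = centroid(psf(6.0 * x)) - centroid(psf(0.0 * x))  # tilt -> translation
```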
Snapshot compressive imaging aims to capture high resolution images using low resolution detectors. The challenge is the generation of simultaneous optical projections that fulfill the compressed sensing reconstruction requirements. We propose the use of controlled aberrations through wavefront coding to produce point spread functions that can simultaneously code and multiplex the scene in a variety of ways. Apart from light efficiency, we can analytically characterize the system matrix response. We explore combinations of Zernike modes and analyze the corresponding coherence parameter. Simulation results using natively sparse and natural scenes demonstrate the feasibility of using controlled aberrations for compressive imaging.
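The coherence parameter referred to here is the mutual coherence of the system matrix, the largest normalized inner product between distinct columns; a direct computation on a random stand-in matrix:

```python
import numpy as np

def mutual_coherence(A):
    """Mutual coherence of sensing matrix A; lower values are better for
    compressed sensing recovery guarantees."""
    G = A / np.linalg.norm(A, axis=0, keepdims=True)   # unit-norm columns
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)
    return gram.max()

A = np.random.default_rng(0).standard_normal((64, 256))  # stand-in system matrix
print(mutual_coherence(A))
```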
We have previously introduced a high throughput multiplexing computational spectral imaging device. The device measures scalar projections of pseudo-arbitrary spectral filters at each spatial pixel. This paper discusses simulation and initial experimental progress in performing computational spectral unmixing by taking advantage of the natural sparsity commonly found in the fractional abundances. The simulation results show a lower unmixing error compared to traditional spectral imaging devices. Initial experimental results demonstrate the ability to directly perform spectral unmixing with less error than multiplexing alone.
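Per pixel, the unmixing step amounts to solving for nonnegative fractional abundances; a sketch with scipy's NNLS and random stand-ins for the endmember projections (the actual method further exploits the sparsity of the abundances):

```python
import numpy as np
from scipy.optimize import nnls

# Solve y ≈ E @ a per pixel with nonnegative abundances a, where E holds the
# endmember spectra as seen through the device's spectral filters. E, a, and
# the noise level are illustrative stand-ins.
rng = np.random.default_rng(1)
E = rng.random((100, 8))                            # 100 projections, 8 endmembers
a_true = np.zeros(8); a_true[[1, 5]] = [0.7, 0.3]   # two active materials (sparse)
y = E @ a_true + 0.01 * rng.standard_normal(100)
a_hat, _ = nnls(E, y)                               # nonnegative least squares
```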
To support the statistical analysis of x-ray threat detection, we developed a very high-throughput x-ray modeling framework based upon GPU technologies and have created three different versions focusing on transmission, scatter, and phase. The simulation of transmission imaging is based on a deterministic photo-absorption approach. This initial transmission approach is then extended to include scatter effects that are computed via the Born approximation. For phase, we modify the transmission framework to propagate complex ray amplitudes rather than radiometric quantities. The highly-optimized NVIDIA OptiX API is used to implement the required ray-tracing in all frameworks, greatly speeding up code execution. In addition, we address volumetric modeling of objects via a hierarchical representation structure of triangle-mesh-based surface descriptions. We show that the x-ray transmission and phase images of complex 3D models can be simulated within seconds on a desktop computer, while scatter images take approximately 30-60 minutes as a result of the significantly greater computational complexity.
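The deterministic photo-absorption model behind the transmission framework is the Beer-Lambert law along each ray; a minimal sketch where the per-material intersection lengths, which OptiX ray tracing supplies in the real framework, are given directly:

```python
import numpy as np

# Beer-Lambert attenuation along a ray crossing several materials:
# I = I0 * exp(-sum_i mu_i * l_i). Coefficients and lengths are illustrative.
def transmission(mu, lengths, i0=1.0):
    return i0 * np.exp(-np.sum(mu * lengths))

mu = np.array([0.2, 1.1])       # attenuation coefficients [1/cm]
lengths = np.array([4.0, 0.5])  # path lengths through each material [cm]
print(transmission(mu, lengths))
```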
The monochromatic single-frame pixel count of a camera is limited by diffraction to the space-bandwidth product, roughly the aperture area divided by the square of the wavelength. We have recently shown that it is possible to approach this limit using multiscale lenses for cameras with space-bandwidth products between 1 and 100 gigapixels. When color, polarization, coherence, and time are included in the image data cube, camera information capacity may exceed 1 petapixel/second. This talk reviews progress in the construction of DARPA AWARE gigapixel cameras and describes compressive measurement strategies that may be used in combination with multiscale systems to push camera capacity to near physical limits.
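A back-of-envelope instance of the space-bandwidth bound quoted above, with illustrative numbers rather than the AWARE design values:

```python
import math

# Space-bandwidth product: aperture area divided by wavelength squared.
D = 0.1                      # aperture diameter [m] (illustrative)
wavelength = 550e-9          # [m]
area = math.pi * (D / 2) ** 2
sbp = area / wavelength**2   # ~ diffraction-limited resolvable pixel count
print(f"{sbp:.2e} resolvable pixels")   # ≈ 2.6e10, i.e., tens of gigapixels
```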
Traditional approaches to persistent surveillance generate prodigious amounts of data, stressing storage, communication, and analysis systems. As such, they are well suited for compressed sensing (CS) concepts. Existing demonstrations of compressive target tracking have utilized time-sequences of random patterns, an approach that is sub-optimal for real-world dynamic scenes. We have been investigating an alternative architecture that we term SCOUT, the Static Computational Optical Undersampled Tracker, which uses a pair of static masks and a defocused detector to acquire a small number of measurements in parallel. We will report on our working prototypes, which have demonstrated successful target tracking at 16x compression.
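A toy forward model of the SCOUT idea, a static mask plus defocus blur read out by a heavily undersampling detector; all sizes, the mask statistics, and the blur kernel are assumptions:

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
N, d = 128, 4                                    # scene size, downsample factor
scene = np.zeros((N, N)); scene[40, 90] = 1.0    # a point-like moving target
mask = (rng.random((N, N)) > 0.5).astype(float)  # one static binary mask

w = np.hanning(15)
kern = np.outer(w, w); kern /= kern.sum()        # defocus-like blur kernel
blurred = fftconvolve(scene * mask, kern, mode="same")
# Detector sums d x d blocks: 32 x 32 measurements for a 128 x 128 scene,
# i.e., 16x fewer pixels, matching the compression reported above.
meas = blurred.reshape(N // d, d, N // d, d).sum(axis=(1, 3))
```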
The DARPA MOSAIC program applies multiscale optical design (a shared objective lens and a parallel array of microcameras) to the acquisition of high pixel count images. Interestingly, these images present as many challenges as opportunities. The imagery is acquired over many slightly overlapping fields with diverse focal, exposure, and temporal parameters. Estimation of a consensus image, display of imagery at human-comprehensible resolutions, automated anomaly detection to guide viewer attention, and power management in a distributed electronic environment are just a few of the novel challenges that arise. This talk describes some of these challenges and presents progress to date.
In this paper, a novel color space transform is presented. It is an adaptive transform based on the application of independent component analysis to the RGB data of an entire color image. The result is a linear and reversible color space transform that provides three new coordinate axes onto which the projected data is as statistically independent as possible, and therefore highly uncorrelated. Compared to many non-linear color space transforms such as HSV or CIE-Lab, the proposed one has the advantage of being a linear transform from the RGB color space, much like XYZ or YIQ. However, its adaptiveness has the drawback of needing an estimate of the transform matrix for each image, which can be computationally expensive for larger images due to the commonly iterative nature of independent component analysis implementations. An image subsampling method is therefore also proposed to enhance the speed, efficiency, and robustness of the novel color space transform. The new color space is applied to a large set of test color images and compared to traditional color space transforms, clearly showing its potential as a promising tool for tasks such as segmentation.
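A compact sketch of the proposed transform using scikit-learn's FastICA, including the subsampling speed-up; the availability of scikit-learn and the adequacy of its defaults are assumptions:

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_color_transform(rgb, subsample=4):
    """Estimate a 3x3 linear, reversible RGB-to-independent-components map.
    Fitting on every `subsample`-th pixel speeds up the iterative ICA."""
    pixels = rgb.reshape(-1, 3).astype(float)
    ica = FastICA(n_components=3, whiten="unit-variance", random_state=0)
    ica.fit(pixels[::subsample])                  # estimate on a pixel subset
    comps = ica.transform(pixels).reshape(rgb.shape)
    return comps, ica                             # ica.mixing_ inverts the map
```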
The Gemini telescopes were designed to be infrared-optimized. Among the features specified for optimal performance is the use of silver-based coatings on the mirrors. The feasibility study contracted by Gemini in 1994-1995 provided both techniques and recipes to apply these high-reflectivity and low-emissivity films. All this effort is now being implemented in our coating plants. At the time of the study, sputtering experiments showed that a reflectivity of 99.1% at 10μm was achievable. We have now produced bare and protected silver sputtered films in our coating plants and conducted environmental testing, both accelerated and in real-life conditions, to assess the durability. We have also already applied, for the first time ever, protected-silver coatings on the main optical elements (M1, M2 and M3) of an 8-m telescope. We report here the progress to date, the performance of the films, and our long-term plans for mirror coatings and maintenance.
The non-uniform response of infrared focal plane array (IRFPA) detectors produces corrupted images with a fixed-pattern noise. In this paper we present an enhanced adaptive scene-based non-uniformity correction (NUC) technique. The method simultaneously estimates the detector parameters and performs the non-uniformity compensation using a neural network approach. In addition, the proposed method does not make any assumption on the kind or amount of non-uniformity present in the raw data. The strength and robustness of the proposed method lie in avoiding ghosting artifacts through the use of optimization techniques in the parameter estimation learning process, such as momentum, regularization, and an adaptive learning rate. The proposed method has been tested with video sequences of simulated and real infrared data taken with an InSb IRFPA, reaching high correction levels, reducing the fixed-pattern noise, decreasing the ghosting, and achieving an effective frame-by-frame adaptive estimation of each detector's gain and offset.
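A stripped-down sketch of the scene-based update: one linear neuron per detector, trained frame by frame against a smoothed version of the corrected image; the momentum, regularization, and adaptive learning rate refinements described above are omitted:

```python
import numpy as np

# Per-pixel LMS step for scene-based NUC: model each detector as
# y = gain * x + offset and nudge the parameters so the corrected image
# approaches a 3x3 local mean (a common, ghosting-prone target that the
# full method stabilizes with the optimization techniques noted above).
def nuc_step(raw, gain, offset, lr=1e-3):
    corrected = gain * raw + offset
    desired = sum(np.roll(np.roll(corrected, i, 0), j, 1)
                  for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
    err = corrected - desired         # desired treated as constant (standard)
    gain -= lr * err * raw            # gradient of squared error w.r.t. gain
    offset -= lr * err                # ... and w.r.t. offset
    return gain, offset
```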