Adaptive optics (AO) is crucial for Extremely Large Telescopes (ELTs), and at the core of its operation lies the wavefront sensor. Although the Shack-Hartmann and pyramid wavefront sensors are more common, the axicon wavefront sensor (AxWFS) is a less-explored alternative in which the light is projected onto the detector over a doughnut-shaped area. This study introduces a groundbreaking enhancement, employing a state-of-the-art deep neural network to estimate the wavefront from the intensity changes within the ring produced by the axicon under different turbulence conditions, without requiring any optical modulation.
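As a purely illustrative sketch (the architecture, the 128x128 input size, and the choice of 20 Zernike modes below are assumptions, not details taken from this abstract), a convolutional regressor mapping the axicon ring image to modal coefficients could look as follows in PyTorch:

    # Minimal sketch: CNN that regresses Zernike coefficients from an axicon ring image.
    # All sizes (128x128 input, 20 modes, layer widths) are illustrative assumptions.
    import torch
    import torch.nn as nn

    class AxWFSNet(nn.Module):
        def __init__(self, n_modes: int = 20):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(64, n_modes)  # one output per Zernike mode

        def forward(self, x):
            x = self.features(x).flatten(1)
            return self.head(x)

    # Example: a batch of 8 single-channel ring images of size 128x128.
    model = AxWFSNet(n_modes=20)
    coeffs = model(torch.randn(8, 1, 128, 128))  # output shape (8, 20)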
The new generation of Extremely Large Telescopes (ELTs) introduces many challenges in optics and engineering. A key challenge is the development of an adaptive optics system able to handle elongated laser guide stars (ELGS). Classic wavefront sensors (WFS), such as the Shack-Hartmann wavefront sensor (SHWFS) or the pyramid wavefront sensor (PyWFS), cannot readily handle elongated stars, and the problem worsens as the atmospheric turbulence becomes stronger. In this work, we present a novel complex field wavefront sensor (CFWFS) that reconstructs the phase and amplitude of extended bodies at the image plane and is then able to recover the turbulent phase at the pupil plane. The proposed WFS scheme uses a parallel phase retrieval algorithm that is four times faster, requiring only eight designed coded apertures (DCA) constructed using sphere-packing coded apertures (SPCA). We present a collection of encouraging preliminary simulation results.
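The abstract does not detail the retrieval algorithm itself, so the following is only a generic multi-mask phase retrieval sketch (alternating projections with a Fraunhofer propagation model), meant to illustrate how intensity images taken through several binary coded apertures can constrain a complex pupil field; the function names, grid size, masks, and iteration count are all assumptions:

    # Illustrative sketch of multi-mask phase retrieval (not the authors' algorithm):
    # the complex pupil field is recovered from intensity images taken through
    # several binary coded apertures, using simple alternating projections.
    import numpy as np

    def fwd(field):   # pupil -> image plane (Fraunhofer model, an assumption)
        return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))

    def bwd(field):   # image -> pupil plane
        return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(field)))

    def retrieve(intensities, masks, n_iter=200):
        """intensities[k]: measured |image|^2 recorded through binary mask masks[k]."""
        n = masks[0].shape[0]
        estimate = np.ones((n, n), dtype=complex)          # initial guess of pupil field
        for _ in range(n_iter):
            for I, M in zip(intensities, masks):
                u = fwd(estimate * M)                       # propagate masked pupil field
                u = np.sqrt(I) * np.exp(1j * np.angle(u))   # impose measured amplitude
                v = bwd(u)                                  # back-propagate
                estimate = np.where(M > 0, v, estimate)     # update only inside the mask
        return estimate

    # Usage with synthetic data: 8 random binary apertures over a 64x64 grid.
    rng = np.random.default_rng(0)
    n = 64
    true_field = np.exp(1j * rng.normal(0, 0.3, (n, n)))
    masks = [(rng.random((n, n)) > 0.5).astype(float) for _ in range(8)]
    intensities = [np.abs(fwd(true_field * M)) ** 2 for M in masks]
    recovered = retrieve(intensities, masks)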
The rise of Extremely Large Telescopes (ELTs) poses challenges for high-resolution phase map reconstruction. Despite the promise of the pyramid wavefront sensor (PyWFS), its inherent non-linearity is limiting. This study proposes techniques to enhance the linearity of the non-modulated PyWFS through deep learning, comparing convolutional neural network (CNN) models (Xception, WFNet, ConvNeXt) with the transformer model Global Context Vision Transformer (GCViT). Results favor transformers, highlighting CNN limitations near the pupil borders. Experimental validation on the PULPOS optical bench underscores the robustness of the GCViT. Trained solely on simulated data under varied SNR and D/r0 conditions, our approach accurately closes the AO loop in a real system and moves beyond the reconstruction paradigm based on the interaction matrix. We demonstrate the high performance of the GCViT in closed loop, obtaining a Strehl ratio over 0.6 for strong turbulence and nearly 0.95 for weak turbulence on the PULPOS optical bench.
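A minimal sketch of the training setup implied above, assuming supervised regression of modal coefficients from simulated non-modulated PyWFS frames; the placeholder backbone below stands in for the Xception/WFNet/ConvNeXt/GCViT models actually compared, and the dataset here is random synthetic tensors rather than simulated turbulence:

    # Minimal training-loop sketch (illustrative only; none of the paper's models
    # are reproduced here). A generic backbone is trained to regress Zernike
    # coefficients from non-modulated PyWFS pupil images.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    # Synthetic stand-in for the simulated PyWFS dataset (random tensors, an assumption).
    images = torch.randn(256, 1, 128, 128)      # PyWFS intensity frames
    coeffs = torch.randn(256, 50)               # target Zernike coefficients
    loader = DataLoader(TensorDataset(images, coeffs), batch_size=32, shuffle=True)

    model = nn.Sequential(                      # placeholder backbone, not GCViT
        nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.GELU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.GELU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 50),
    )
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for epoch in range(5):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)         # MSE between predicted and true modes
            loss.backward()
            opt.step()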
Any beam that propagates through optical turbulence will experience distortions in both its amplitude and phase, leading to various effects such as beam wandering, beam spreading, and irradiance fluctuations. Reconstructing the complete field of a perturbed beam is a challenging task due to the dynamic nature of these effects. Interferometric wavefront reconstruction techniques—such as those based on holography—are commonly used but are hindered by their sensitivity to environmental disturbances and alignment errors. However, new complex phase retrieval methods based on propagation equations have emerged, which do not require prior knowledge of the beam to be reconstructed and are suitable for amplitude or phase objects, or both. We propose an experimental implementation of a complex phase retrieval technique for characterizing Gaussian beams propagating through optical turbulence, using binary amplitude modulation with a digital micro-mirror device (DMD). This approach is ideal for dynamic applications and has enabled us to achieve experimental high-speed complex wavefront reconstruction of optical beams through controlled real turbulence. This experiment corresponds to the initial step in our research focused on gaining a deeper understanding of optical turbulence from an experimental perspective.
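As a rough sketch of the kind of forward model involved (the optical layout, sampling, and turbulence statistics are assumptions, and the white-noise phase screen below is only a crude stand-in for real turbulence), a Gaussian beam can be modulated by a binary DMD mask and numerically propagated to the camera with the angular-spectrum method:

    # Sketch of an assumed forward model (not the authors' exact setup): a Gaussian
    # beam passes a random phase screen, is modulated by a binary DMD mask, and is
    # propagated to the camera plane with the angular-spectrum method.
    import numpy as np

    def angular_spectrum(field, wavelength, dx, z):
        """Propagate a sampled complex field a distance z (all lengths in metres)."""
        n = field.shape[0]
        fx = np.fft.fftfreq(n, d=dx)
        FX, FY = np.meshgrid(fx, fx)
        arg = 1.0 / wavelength**2 - FX**2 - FY**2
        kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
        H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)   # drop evanescent components
        return np.fft.ifft2(np.fft.fft2(field) * H)

    n, dx, wl = 256, 10e-6, 633e-9
    x = (np.arange(n) - n / 2) * dx
    X, Y = np.meshgrid(x, x)
    beam = np.exp(-(X**2 + Y**2) / (0.4e-3) ** 2)          # Gaussian beam, w0 = 0.4 mm
    rng = np.random.default_rng(1)
    screen = np.exp(1j * rng.normal(0, 1.0, (n, n)))       # crude stand-in for turbulence
    dmd = (rng.random((n, n)) > 0.5).astype(float)         # binary amplitude mask
    camera = np.abs(angular_spectrum(beam * screen * dmd, wl, dx, 0.1)) ** 2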
We present the design and implementation of an adaptive optics test bench recently built at the School of Electrical Engineering of the Pontificia Universidad Católica de Valparaíso in Chile. The flexible design of the PULPOS bench incorporates state-of-the-art, high-speed spatial light modulators for atmospheric turbulence emulation and wavefront correction, a deformable mirror for modulation, and a variety of wavefront sensors such as a pyramid wavefront sensor. PULPOS serves as a platform for research on adaptive optics and wavefront reconstruction using artificial intelligence techniques, as well as for educational purposes.
In this work, we evaluate a specially crafted deep convolutional neural network that estimates the wavefront aberration modes directly from pyramid wavefront sensor (PyWFS) images. Overall, the use of deep neural networks allows us to improve the estimation performance as well as the operational range of the PyWFS, especially in cases of strong turbulence or bad seeing ratios D/r0. Our preliminary results provide evidence that by using neural networks, instead of the classic linear estimation methods, we can obtain the sensitivity of a low-modulation response while extending the linearity range of the PyWFS, reducing the residual variance by a factor of 1.6 when dealing with an r0 as low as a few centimeters.
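For illustration, the step that follows modal estimation, turning predicted mode coefficients back into a phase map over the pupil, can be written compactly; only the first few Noll-normalized Zernike modes are spelled out here, and the coefficients are arbitrary placeholders rather than actual network outputs:

    # Sketch of converting estimated mode coefficients into a pupil-plane phase map.
    # Coefficients are placeholders; only a few Zernike modes are written explicitly.
    import numpy as np

    n = 128
    x = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(x, x)
    rho, theta = np.hypot(X, Y), np.arctan2(Y, X)
    pupil = (rho <= 1.0).astype(float)

    modes = [
        2 * rho * np.cos(theta),                  # tilt x
        2 * rho * np.sin(theta),                  # tilt y
        np.sqrt(3) * (2 * rho**2 - 1),            # defocus
        np.sqrt(6) * rho**2 * np.cos(2 * theta),  # astigmatism
    ]
    coeffs = np.array([0.3, -0.1, 0.5, 0.2])      # e.g. estimated modes, in radians
    phase = pupil * sum(c * Z for c, Z in zip(coeffs, modes))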