Wide-field wavefront sensing with convolutional neural networks and ordinary least squares
13 December 2020
David Thomas, Joshua Meyers, Steven M. Kahn
Abstract
We present a two-stage approach to wide-field wavefront sensing and demonstrate its ability to estimate and enhance image quality for the upcoming Rubin Observatory. The first stage makes local wavefront estimates with a convolutional neural network; the second stage uses linear regression to solve for the global optical state. The Rubin Observatory will have a 3.5 degree field of view, a highly degenerate optical system, and a curvature wavefront sensing system, making it the perfect test case. We trained our model on 600,000 simulated Rubin Observatory intra- and extra-focal star images (donuts). It learns to estimate the optics contribution to the wavefront and separate it from a myriad of other contributions. This computationally efficient approach can process 1,000 times as many donuts as proposed alternatives. This significant increase in bandwidth leads to a richer and more accurate characterization of the evolution of the telescope optics.
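The second stage described above can be illustrated with a minimal sketch: assuming the CNN produces per-donut wavefront coefficients that depend linearly on the telescope's optical degrees of freedom through a sensitivity matrix, ordinary least squares recovers the global optical state. The dimensions, the sensitivity matrix, and the noise model below are illustrative assumptions, not the actual Rubin Observatory model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not Rubin's actual configuration)
n_donuts = 200     # donut images across the field
n_zernikes = 10    # wavefront coefficients estimated per donut
n_dof = 5          # optical degrees of freedom (e.g. mirror/camera modes)

# Hypothetical linear sensitivity matrix mapping the optical state
# to the stacked per-donut wavefront coefficients
A = rng.normal(size=(n_donuts * n_zernikes, n_dof))

# A "true" optical state to recover
true_state = rng.normal(size=n_dof)

# Simulated stage-one outputs: optics contribution plus per-donut noise
y = A @ true_state + 0.01 * rng.normal(size=n_donuts * n_zernikes)

# Stage two: ordinary least squares for the global optical state
state_hat, *_ = np.linalg.lstsq(A, y, rcond=None)

print("recovery error:", np.linalg.norm(state_hat - true_state))
```

Because the system is heavily overdetermined (2,000 measurements for 5 unknowns here), the per-donut noise averages out, which is why processing many more donuts yields a more accurate optical state estimate.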
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
David Thomas, Joshua Meyers, and Steven M. Kahn "Wide-field wavefront sensing with convolutional neural networks and ordinary least squares", Proc. SPIE 11448, Adaptive Optics Systems VII, 114484H (13 December 2020); https://doi.org/10.1117/12.2576020
KEYWORDS: Wavefront sensors, Observatories, Stars, Wavefronts, Image enhancement, Image quality, Refraction
