This paper explores the efficacy of employing machine learning, specifically an encoder-style convolutional neural network, to estimate the magnitude of an optical-phase discontinuity (|Δϕ|) that produces an aberrated, far-field irradiance pattern. The model receives a single normalized 32×32 irradiance-pattern image and returns the estimated |Δϕ|. We train and validate the model on simulated data with varying values of Δϕ (from 0 to 2π radians), varying discontinuity locations within the aperture of the simulated system, and varying levels of background noise. In exploring this trade space, we find the model's mean absolute error to range from 0.0603 to 0.475 radians. We also explore the model's versatility by varying the spot size, supporting the transfer of this model across systems in which the focal length, aperture diameter, or wavelength of light may differ and thereby change the number of pixels carrying information in each irradiance pattern. Finally, we test the model on experimental data collected with a spatial light modulator, obtaining a mean absolute error of 0.909 radians. This research supports the development of a shock-wave-tolerant phase reconstruction algorithm for the Shack–Hartmann wavefront sensor. More broadly, robust shock-wave-tolerant phase reconstruction algorithms will improve wavefront sensing efforts where shock waves are present.
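To make the input/output contract described above concrete, the following PyTorch sketch shows one plausible encoder-style regressor mapping a single-channel 32×32 irradiance image to a scalar |Δϕ| estimate. The abstract does not specify the architecture, so the layer counts, channel widths, and class name here are hypothetical assumptions, not the paper's actual model.

```python
import torch
import torch.nn as nn

class PhaseStepRegressor(nn.Module):
    """Hypothetical encoder-style CNN: 32x32 normalized irradiance -> |dphi| (radians)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 1x32x32 -> 16x32x32
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 16x16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # -> 32x16x16
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 32x8x8
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # -> 64x8x8
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 64x4x4
        )
        self.head = nn.Sequential(
            nn.Flatten(),                                 # -> 1024
            nn.Linear(64 * 4 * 4, 64),
            nn.ReLU(),
            nn.Linear(64, 1),                             # scalar |dphi| estimate
        )

    def forward(self, x):
        return self.head(self.encoder(x))

# Usage: one normalized 32x32 irradiance pattern (batch of 1, single channel).
model = PhaseStepRegressor()
pattern = torch.rand(1, 1, 32, 32)     # stand-in for a real, normalized pattern
estimate = model(pattern)              # estimated |dphi| in radians
```

A regression head with a single linear output (trained with, e.g., an L1 loss) is consistent with the reported mean-absolute-error metric, though the paper's actual loss and training procedure are not stated here.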