Scientists at the Starfire Optical Range (SOR) have been researching Rayleigh beacons and mesospheric sodium beacons for adaptive optics (AO) for nearly three decades. We developed four different sodium-wavelength lasers, all of which were based on diode-pumped, sum-frequency Nd:YAG oscillators. In 2016, we combined light from two commercial 22-watt sodium-wavelength lasers to form a single beacon. These commercial lasers, which use resonant frequency doubling of light from a Raman fiber amplifier, were built by MPB Communications and Toptica Projects. In 2019, we started to develop and procure a 75-watt sodium-wavelength laser to enable better correction of turbulence in poor seeing. In conjunction with Toptica Projects and European astronomers, we have increased the return flux from sodium beacons by shifting or chirping the laser wavelength to compensate for the recoil of optically pumped sodium atoms. In addition, we have demonstrated a single-sideband D2b re-pumper. In this talk, we review the development of our new sodium beacon laser and discuss results from an on-sky test in 2023 that demonstrate the improvement in beacon brightness achieved with these techniques.
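For context, the chirp compensates the cumulative photon recoil of the sodium atoms; a rough, textbook-level estimate of the per-photon shift (standard constants only, not the operating parameters of the SOR or Toptica lasers) is

\[
  \Delta\nu_{\mathrm{recoil}} \;=\; \frac{\hbar k^{2}}{2\pi\, m_{\mathrm{Na}}}
  \;=\; \frac{h}{m_{\mathrm{Na}}\,\lambda^{2}}
  \;\approx\; \frac{6.63\times10^{-34}\ \mathrm{J\,s}}
                   {\left(3.82\times10^{-26}\ \mathrm{kg}\right)\left(589\times10^{-9}\ \mathrm{m}\right)^{2}}
  \;\approx\; 50\ \mathrm{kHz},
\]

so each absorption-emission cycle shifts an atom's resonance by roughly 50 kHz in the laser frame, and chirping the laser frequency at a rate matched to the photon scattering rate keeps the optically pumped atoms near resonance.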
In a previous paper, the authors presented a benchtop demonstrator for a stereo scintillation detection and ranging (SCIDAR) system at the Air Force Research Lab’s Starfire Optical Range. The stereo SCIDAR setup and reconstruction algorithms from this effort accurately characterized the seven-layer atmosphere generated by the atmospheric simulation and adaptive optics laboratory testbed’s (ASALT’s) multi-conjugate adaptive optics (MCAO) bench. This paper details the successful transition of that stereo SCIDAR system to the coudé room of a 1.0 m telescope. It shares lessons learned, including additional components and alignment techniques for the on-sky system. Finally, it presents automation efforts of the system to support extended on-sky observations.
Estimating the profile of the refractive-index structure constant, C_n^2(z), is of great importance for characterizing the turbulence through which adaptive optical systems operate. Stereo Scintillation Detection and Ranging (SCIDAR) is one of the well-developed techniques for measuring such a profile using light from binary stars. The Air Force Research Laboratory's Starfire Optical Range (SOR) is beginning work to add a stereo SCIDAR capability to the site. This work presents the development and testing of a stereo SCIDAR system in the Atmospheric Simulation and Adaptive Optics Laboratory Testbed (ASALT) at SOR. The stereo SCIDAR system was constructed on the ASALT lab's Multiconjugate Adaptive Optics (MCAO) bench, which features an enhanced atmospheric turbulence simulator (ATS) that can use up to 10 phase screens to test the capabilities of the stereo SCIDAR system in profiling distributed turbulence under a wide range of conditions.
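For reference, the altitude assignment in stereo SCIDAR follows a standard triangulation relation (quoted here as generic background, not a result specific to this testbed): a turbulent layer at height h above the analysis plane produces a peak in the cross-covariance of the two stars' scintillation patterns at a pupil-plane offset

\[
  \Delta r \;=\; \theta\, h
  \qquad\Longrightarrow\qquad
  h \;=\; \frac{\Delta r}{\theta},
  \qquad
  h_{\max} \;\approx\; \frac{D}{\theta},
\]

where \(\theta\) is the angular separation of the binary and \(D\) is the aperture diameter, which limits the maximum altitude that can be profiled for a given binary separation.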
We report on a testbed to compare the performance of three different wavefront sensors: the Shack-Hartmann Wavefront Sensor (SHWFS), the Pyramid Wavefront Sensor (PWFS), and the non-linear Curvature Wavefront Sensor (nlCWFS). No single wavefront sensor easily senses all aspects of atmospheric turbulence. For instance, the SHWFS has a large dynamic range and a linear response to input phase aberrations, but it is not sensitive to low-order modes. The PWFS uses the full spatial resolution of the pupil, which gives it increased sensitivity to low-order modes; however, it still treads the line between achieving high dynamic range and high sensitivity. The nlCWFS is the only wavefront sensor designed to sense both low and high spatial frequencies, but this comes at the cost of a more complex reconstruction algorithm. We discuss the reconstruction algorithm for each WFS along with simulated comparisons, present the optical design for the WFS comparison testbed, and outline the adaptive optics control system.
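To illustrate the kind of processing each sensor requires, the sketch below shows the simplest case: a Shack-Hartmann pipeline reduced to centroiding and a least-squares modal solve. It is a minimal, self-contained example with synthetic spot data and a made-up interaction matrix; the function and variable names are illustrative and are not drawn from the testbed software.

# Minimal Shack-Hartmann slope measurement and least-squares modal
# reconstruction (illustrative sketch only; synthetic data throughout).
import numpy as np

def centroid_slopes(spots, ref_centroids, pixel_scale):
    """Convert subaperture spot images into a stacked x/y slope vector.

    spots         : (nsub, ny, nx) array of subaperture images
    ref_centroids : (nsub, 2) reference (y, x) centroids in pixels
    pixel_scale   : wavefront tilt (radians) per pixel of spot motion
    """
    nsub, ny, nx = spots.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    flux = spots.sum(axis=(1, 2))
    cy = (spots * yy).sum(axis=(1, 2)) / flux
    cx = (spots * xx).sum(axis=(1, 2)) / flux
    dy = (cy - ref_centroids[:, 0]) * pixel_scale
    dx = (cx - ref_centroids[:, 1]) * pixel_scale
    return np.concatenate([dx, dy])

def build_reconstructor(interaction_matrix, rcond=1e-3):
    """Least-squares reconstructor: modal coefficients = R @ slopes."""
    return np.linalg.pinv(interaction_matrix, rcond=rcond)

# Synthetic example: 100 subapertures with 8x8-pixel spots and a random
# 20-mode interaction matrix standing in for a calibrated one.
rng = np.random.default_rng(0)
nsub = 100
spots = rng.random((nsub, 8, 8))
refs = np.full((nsub, 2), 3.5)
slopes = centroid_slopes(spots, refs, pixel_scale=1e-6)
D = rng.standard_normal((2 * nsub, 20))
R = build_reconstructor(D)
modes = R @ slopes

The pyramid and non-linear curvature sensors replace the centroiding front end with, respectively, pupil-intensity slope estimates and an iterative phase-retrieval step, which is where much of their additional algorithmic complexity lies.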
Head-up displays offer ease-of-use and safety advantages over traditional head-down displays when implemented in aircraft and vehicles. Unfortunately, in the traditional head-up display projection method, the size of the image is limited by the size of the projection optics. In many vehicular systems, the size requirements for a large-field-of-view head-up display exceed the space available for these projection optics. Thus, an alternative approach is needed to present a large-field-of-view image to the user. By using holographic optical elements affixed to waveguides, it becomes possible to reduce the size of the projection system while producing a comparatively large image. Additionally, modulating the diffraction efficiency of some of the holograms in the system presents an expanded eyebox to the viewer. This presentation will discuss our work to demonstrate a magnified far-field image with an in-line two-dimensional eyebox expansion. It will explore recording geometries and configurations and will conclude by discussing challenges for future implementation.
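A common design relation for this kind of efficiency modulation (stated here as generic background; the actual apodization used in our holograms may differ) is that, for N sequential out-coupling events to extract equal power from the guided beam, the i-th extraction hologram should have diffraction efficiency

\[
  \eta_i \;=\; \frac{1}{N - i + 1}, \qquad i = 1,\dots,N,
\]

so that, for example, four extractions with efficiencies 1/4, 1/3, 1/2, and 1 each deliver one quarter of the injected power, giving uniform image brightness across the expanded eyebox.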
Holography can offer unique solutions to the specific problems faced by automotive optical systems. Frequently, when possibilities have been exhausted using refractive and reflective designs, diffraction can come to the rescue by opening a new dimension to explore. Holographic optical elements (HOEs), for example, are thin-film optics that can advantageously replace lenses, prisms, or mirrors. Head-up displays (HUDs) and LIDAR for autonomous vehicles are two of the systems where our group has used HOEs to provide original answers to the limitations of classical optics. With HUDs, HOEs address the limited field of view and small eyebox usually found in projection systems. Our approach is to recycle the light multiple times inside a waveguide so the combiner can be as large as the entire windshield. In this system, a hologram is used to inject a small image at one end of a waveguide, and another hologram is used to extract the image several times, providing an expanded eyebox. In the case of LIDAR systems, non-mechanical beam scanning based on a diffractive spatial light modulator (SLM) can only achieve an angular range of a few degrees. We used multiplexed volume holograms (VHs) to amplify the initial diffraction angle from the SLM to achieve up to 4π steradian coverage in a compact form factor.
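To give a sense of the scale of this angular amplification (the numbers below are illustrative assumptions, not the parameters of our LIDAR system), an SLM with pixel pitch p can steer a beam only up to roughly its first-order half-angle, while each multiplexed volume grating remaps that small input angle onto a much larger output angle set by its grating period \(\Lambda\):

\[
  \theta_{\mathrm{SLM,max}} \;\approx\; \sin^{-1}\!\left(\frac{\lambda}{2p}\right),
  \qquad
  \sin\theta_{\mathrm{out}} \;=\; \sin\theta_{\mathrm{in}} \;+\; \frac{\lambda}{\Lambda}.
\]

For example, at an assumed \(\lambda\) = 532 nm and p = 8 µm, the SLM alone reaches only about ±1.9°, whereas a stack of multiplexed gratings, each Bragg-matched to a different SLM steering angle, can redirect those beams over a far wider output field.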
We present a technique to record refreshable holographic stereograms continuously. We eliminated the translation stage that shifts the recording beams back and forth and replaced it with an uninterrupted transparent belt holding holographic lenses. The belt is driven along a perimeter, shifting the lens laterally in front of a photorefractive screen without reversing direction. The holographic lenses focus the object beam onto holographic pixels (hogels) and are permanently recorded in a thin photopolymer. The photopolymer material is flexible enough for the lenses to follow the curvature of the belt when it goes around the tensioning rollers. The hogel data are uploaded sequentially onto a spatial light modulator to form the object beam. The rotation of the belt in a single direction allows continuous operation and a much faster recording speed than with a translation stage that needs to reverse direction at the end of its travel span.
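The throughput gain can be seen from a simple rate relation (generic, with no specific values from our system implied): for a belt moving at constant speed v past hogels of pitch p, the recording rate is

\[
  f_{\mathrm{hogel}} \;=\; \frac{v}{p},
\]

sustained indefinitely, whereas a reciprocating stage spends a fixed dead time decelerating, reversing, and re-accelerating at the end of every pass.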