Neuromorphic cameras process large amounts of information by asynchronously recording changes in photon levels at every pixel. Because these cameras have only recently entered the commercial market, research characterizing them is just emerging. Determining sensor capabilities outside a laboratory environment helps identify future applications of this technology. An experiment was designed to determine whether the camera could detect laser scatter within the atmosphere and extract information about the laser. Testing in real-world environments showed that the camera can distinguish laser scatter against environmental backdrops at varying distances, determine the repetition frequency of the laser, and provide preliminary angle-determination data.
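
As a rough illustration of the repetition-frequency measurement described above, the sketch below bins asynchronous event timestamps into a uniform rate signal and picks the dominant peak of its spectrum. The timestamp units, bin width, and the synthetic 1 kHz example are assumptions for illustration, not details taken from the experiment.

import numpy as np

def estimate_prf(timestamps_us, bin_us=10.0):
    """Estimate a laser's pulse repetition frequency (PRF) from
    neuromorphic-camera event timestamps (microseconds).

    Bins the asynchronous events into a uniform event-rate signal,
    then locates the dominant peak in its magnitude spectrum.
    """
    t = np.asarray(timestamps_us, dtype=float)
    t -= t.min()
    n_bins = int(np.ceil(t.max() / bin_us)) + 1
    rate, _ = np.histogram(t, bins=n_bins, range=(0.0, n_bins * bin_us))

    # FFT of the mean-removed rate signal; the peak frequency approximates the PRF.
    spectrum = np.abs(np.fft.rfft(rate - rate.mean()))
    freqs_hz = np.fft.rfftfreq(n_bins, d=bin_us * 1e-6)
    return freqs_hz[np.argmax(spectrum[1:]) + 1]  # skip the DC bin

# Example: synthetic events from a 1 kHz pulsed source plus background noise.
rng = np.random.default_rng(0)
pulses = np.arange(0.0, 1e6, 1000.0)                  # 1 ms spacing over a 1 s record
events = np.concatenate([np.repeat(pulses, 20) + rng.normal(0, 5, 20 * pulses.size),
                         rng.uniform(0, 1e6, 5000)])  # background events
print(f"Estimated PRF: {estimate_prf(events):.1f} Hz")  # ~1000 Hz
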
Imaging through deep turbulence is a challenging and unsolved problem. There have been recent advances toward sensing and correcting moderate turbulence using digital holography (DH). With DH, we use optical heterodyne detection to sense the amplitude and phase of the light reflected from an object. This phase information allows us to digitally back-propagate the measured field to estimate and correct distributed-volume aberrations. Recently, we developed a model-based iterative reconstruction (MBIR) algorithm for sensing and correcting atmospheric turbulence using multi-shot DH data (i.e., multiple holographic measurements). Using simulation, we showed the ability to correct deep-turbulence effects, loosely characterized by Rytov numbers greater than 0.75 and isoplanatic angles near the diffraction-limited viewing angle. In this work, we demonstrate the validity of our method using laboratory measurements. Our experiments used a combination of multiple calibrated Kolmogorov phase screens along the propagation path to emulate distributed-volume turbulence. This controlled laboratory setup allowed us to demonstrate our algorithm's performance in deep-turbulence conditions using real data.
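
The digital back-propagation step mentioned above can be illustrated with a standard angular-spectrum (free-space transfer function) propagator. This is a generic sketch rather than the DH-MBIR implementation itself, and the grid size, sample spacing, and wavelength are assumed values.

import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, dz):
    """Propagate a complex field a distance dz (negative dz back-propagates)
    using the angular-spectrum transfer function of free space."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                      # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    k = 2 * np.pi / wavelength
    kz_sq = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    H = np.exp(1j * kz * dz) * (kz_sq > 0)            # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: back-propagate a measured pupil-plane field to an assumed phase-screen plane.
n, dx, wavelength = 256, 5e-6, 1.064e-6               # assumed grid and wavelength
measured = np.exp(1j * np.random.default_rng(1).uniform(0, 2 * np.pi, (n, n)))
at_screen = angular_spectrum_propagate(measured, wavelength, dx, dz=-50.0)
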
Recently, we proposed a deep-learning (DL)-based method for solving coherent imaging inverse problems, known as coherent plug-and-play (CPnP). CPnP is a regularized inversion framework that works with coherent imaging data corrupted by phase errors. The algorithm jointly produces a focused, speckle-free image and an estimate of the phase errors. It combines physics-based propagation models with image models learned with DL and produces higher-quality estimates than other non-DL methods. Previously, we had only demonstrated CPnP using simulated data. In this work, we design a coherent imaging test bed to validate CPnP using real data. We devise a method to obtain truth data for both the images and the phase errors, which allows us to quantify performance and compare different algorithms. Our results validate the improved performance of CPnP relative to other existing methods.
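
For context, the sketch below shows the generic plug-and-play pattern that CPnP builds on: alternate a gradient step on a physics-based data-fidelity term with a denoiser acting as the learned prior. The forward model, denoiser, and parameters here are placeholders for illustration and do not reproduce the published CPnP algorithm.

import numpy as np

def pnp_reconstruct(y, forward, adjoint, denoiser, n_iters=50, step=1.0):
    """Generic plug-and-play iteration: gradient step on the data-fidelity
    term ||forward(x) - y||^2, followed by a denoiser that serves as the prior.

    `forward`, `adjoint`, and `denoiser` are user-supplied callables; in a
    coherent-imaging setting the forward model would include propagation
    and any estimated phase errors.
    """
    x = adjoint(y)                                    # simple initialization
    for _ in range(n_iters):
        grad = adjoint(forward(x) - y)                # data-fidelity gradient
        x = denoiser(x - step * grad)                 # prior enforced via denoising
    return x

# Toy usage with an identity forward model and a box-filter "denoiser".
def box_denoise(x, k=3):
    pad = np.pad(x, k // 2, mode="edge")
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = pad[i:i + k, j:j + k].mean()
    return out

y = np.random.default_rng(2).normal(size=(32, 32))
x_hat = pnp_reconstruct(y, lambda x: x, lambda r: r, box_denoise, n_iters=10, step=0.5)
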
Imaging through deep atmospheric turbulence is a challenging and unsolved problem. However, digital holography (DH) has recently demonstrated the potential for sensing and digitally correcting moderate turbulence. DH uses coherent illumination and coherent detection to sense the amplitude and phase of light reflected off an object. With this phase information, we can digitally propagate the measured field to points along the optical path in order to estimate and correct the distributed-volume aberrations. This so-called multi-plane correction is critical for overcoming the limitations posed by moderate and deep atmospheric turbulence. Here we loosely define deep-turbulence conditions as those with Rytov numbers greater than 0.75 and isoplanatic angles near the diffraction-limited viewing angle, and moderate-turbulence conditions as those with Rytov numbers between 0.1 and 0.75 and isoplanatic angles at least a few times larger than the diffraction-limited viewing angle. Recently, we developed a model-based iterative reconstruction (MBIR) algorithm for sensing and correcting atmospheric turbulence using single-shot DH data (i.e., a single holographic measurement). This approach uniquely demonstrated the ability to correct distributed-volume turbulence in the moderate-turbulence regime using only single-shot data. While the DH-MBIR algorithm pushed the performance limits for single-shot data, it fails in deep-turbulence conditions. In this work, we modify the DH-MBIR algorithm for use with multi-shot data and explore how increasing the number of measurements extends our capability to sense and correct imagery in deep-turbulence conditions.
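
For reference, the Rytov number and isoplanatic angle used to delimit these regimes can be computed from a turbulence profile Cn^2(z). The sketch below uses the standard plane-wave log-amplitude-variance and isoplanatic-angle integrals from the propagation literature; the specific convention, path length, wavelength, and Cn^2 value are assumptions, since the abstract does not state them.

import numpy as np

def rytov_number(cn2, z, wavelength):
    """Plane-wave log-amplitude (Rytov) variance for a path-resolved Cn^2(z),
    with z measured from the source toward the receiver at z[-1]."""
    k = 2 * np.pi / wavelength
    L = z[-1]
    return 0.563 * k**(7 / 6) * np.trapz(cn2 * (L - z)**(5 / 6), z)

def isoplanatic_angle(cn2, z, wavelength):
    """Isoplanatic angle (radians), with z measured from the receiver."""
    k = 2 * np.pi / wavelength
    return (2.914 * k**2 * np.trapz(cn2 * z**(5 / 3), z))**(-3 / 5)

# Example: constant Cn^2 along a 5 km horizontal path at 1.064 um (assumed values).
z = np.linspace(0.0, 5e3, 500)
cn2 = np.full_like(z, 1e-15)                          # m^(-2/3)
print(rytov_number(cn2, z, 1.064e-6))                 # > 0.75 would indicate deep turbulence
print(np.degrees(isoplanatic_angle(cn2, z, 1.064e-6)) * 3600, "arcsec")
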