Deep learning for anisoplanatic optical turbulence mitigation in long-range imaging
Matthew A. Hoffmire, Russell C. Hardie, Michael A. Rucci, Richard Van Hook, Barry K. Karch
Abstract

We present a deep learning approach for restoring images degraded by atmospheric optical turbulence. We consider the case of terrestrial imaging over long ranges with a wide field-of-view. This produces an anisoplanatic imaging scenario where turbulence warping and blurring vary spatially across the image. The proposed turbulence mitigation (TM) method assumes that a sequence of short-exposure images is acquired. A block matching (BM) registration algorithm is applied to the observed frames for dewarping, and the resulting images are averaged. A convolutional neural network (CNN) is then employed to perform spatially adaptive restoration. We refer to the proposed TM algorithm as the block matching and CNN (BM-CNN) method. Training the CNN is accomplished using simulated data from a fast turbulence simulation tool capable of producing a large amount of degraded imagery from declared truth images rapidly. Testing is done using independent data simulated with a different well-validated numerical wave-propagation simulator. Our proposed BM-CNN TM method is evaluated in a number of experiments using quantitative metrics. The quantitative analysis is made possible by virtue of having truth imagery from the simulations. A number of restored images are provided for subjective evaluation. We demonstrate that the BM-CNN TM method outperforms the benchmark methods in the scenarios tested.

1. Introduction

The acquisition of long-range images is often affected by atmospheric optical turbulence. The turbulence degradation is caused by changes in temperature and pressure along the optical path that lead to variations in the index of refraction. The result is a quasi-periodic spatial and temporal blur and warping.1–4 In the field of astronomy, this has been heavily investigated.3 For astronomy, the field-of-view tends to be narrow, which causes the scene to be affected by the atmosphere evenly across the entirety of the image. In this isoplanatic scenario, one can model the degradation with a single linear shift-invariant (LSI) point spread function (PSF). In this paper, we consider a terrestrial long-range imaging scenario with a relatively wide field-of-view. This leads to anisoplanatic conditions where the degradation varies across the scene. This requires a spatially varying PSF to successfully model the degradation.

There are many different approaches proposed in the literature to address the problem of turbulence mitigation (TM) via post processing of degraded images. Most methods are based on processing a sequence of short-exposure (SE) images.1,5 The reasoning is that SE images can effectively freeze warping distortions temporally. This produces images with minimal motion blur. However, the geometry of these individual frames varies temporally and is most likely incorrect.6,7 On the other hand, averaging the SE frames produces an image that is equivalent to a long-exposure (LE) image. The LE image tends to have the correct geometry, but also has significant amounts of motion blur.1 For these reasons, many TM algorithms follow the steps of dewarping, fusion, and then blind deconvolution.1,8–15 Dewarping refers to motion compensation of the SE frames so as to reduce the effects of motion blur when fused. The motion compensation is typically followed by some sort of averaging to combine the frames and reduce noise. This fusion also serves to average the spatially varying blur so that the resulting blur can be well modeled by a simpler LSI PSF.1 Finally, the deconvolution step attempts to undo the blurring from the LSI PSF.

The block matching and Wiener filtering (BM-WF)1 method uses a parametric PSF model and a fixed Wiener filter (WF) for the deconvolution. The parametric model for the PSF takes into account diffraction from the optics, SE blurring, and residual motion blur from imperfect image registration using BM. The parametric PSF used in BM-WF has only two unknown parameters, making it much easier to optimize than traditional blind deconvolution methods. Another popular TM method is bispectral speckle imaging (BSI).8–10,16–23 The BSI method attempts to mitigate turbulence through the estimation of image magnitude and phase separately. The name comes from “high-frequency intensity fluctuations that are referred to as speckles.”9 The BM-WF performed well in the studies reported by Hardie et al.,1 and it is well suited to adaptation with machine learning. For these reasons, we have chosen to build on the framework of BM-WF here.

Designing, optimizing, and evaluating TM algorithms are complicated by the difficulty in obtaining ground truth imagery. Recently, there have been a number of turbulence simulators for anisoplanatic imaging presented in the literature.2,19,24–27 Simulation has the advantage of being able to produce a virtually unlimited amount of degraded images with corresponding ground truth for nearly any scenario. These simulated data can be used for training machine learning methods and for quantitative performance analysis of any TM methods. We believe accurate simulation tools are essential to advance the field of TM. One well-validated turbulence simulator is the numerical wave-propagation anisoplanatic simulator developed by Hardie et al.2 This is the method we shall use for testing the TM algorithms presented in this paper. We consider this type of numerical wave method the “gold standard” for anisoplanatic turbulence simulation. However, it does have a relatively high computational complexity. Thus, we also consider a fast simulation method based on that in Schwartzman et al.27 We have extended Schwartzman’s warping-only simulator to add atmospheric blurring, and we use different tilt correlation statistics as described in Sec. 2. Machine learning TM methods require a large amount of training data. Thus, we believe our fast simulator is an effective choice for this purpose. While we use the fast simulator for training, we shall validate all of our results by generating testing data using the numerical wave-propagation simulator.

The WF used for deconvolution in the BM-WF method is known to be optimum in a mean squared error sense for wide sense stationary (WSS) Gaussian images and noise along with an LSI blur. However, in practice, we know that real images are not generally WSS and the statistics are not necessarily Gaussian. Furthermore, while the fusion process does seek to make the residual blur LSI, it may not be perfectly successful in this regard. For these reasons, we believe that there is room for improvement in the BM-WF approach with respect to the WF. It is our hypothesis that we can replace the WF with a convolutional neural network (CNN) and use deep learning (DL) to obtain improved performance relative to the BM-WF method. We argue that the non-linear CNN method is better equipped to address the non-stationary nature of most images and also to address any spatially varying residual blur that may be present. The availability of the new atmospheric turbulence simulators described above makes it possible to generate a virtually unlimited amount of training data to support successful DL with a CNN in this application. We refer to the new proposed algorithm as the BM and convolutional neural network (BM-CNN) method.

Recently, CNNs have been used heavily in image processing for a variety of tasks, including image restoration and image super-resolution (SR). There have been many studies showing that CNNs are successful in restoring images.28–32 While there are many different types of networks that can perform SR and restoration,33–37 here, we employ one recently proposed by Elwarfalli and Hardie.38 The network is referred to as the Fusion of Interpolated Frames Network (FIFNet). This network has been chosen based on good demonstrated SR performance and its lightweight architecture, i.e., it has relatively few convolution layers and can be trained from scratch rather quickly. Large networks can take days or even weeks to train from scratch. FIFNet can be trained in a matter of hours. The larger networks also often require transfer learning, where pretrained networks are imported and refined with further training. Such transfer learning is not necessary for FIFNet.

To the best of our knowledge, there is only one prior study in the literature that has been done using CNNs to perform TM.39 In that work, the authors show that a CNN can restore images affected by turbulence. However, their experimental study uses a simple simulator with images degraded by only nine different PSFs. Furthermore, they do not employ any image registration in their restoration method. In contrast, here, we use a full anisoplanatic simulator2 for validation and a modified version of Schwartzman’s warping simulator for training.27 Our BM-CNN method also gains improved performance by incorporating BM registration prior to CNN processing. We study the performance of our BM-CNN method on four different turbulence levels at two noise levels. We shall show that the BM-CNN consistently outperforms the BM-WF method and the other benchmark methods in the scenarios tested here. We find the boost in performance to be most pronounced at light to moderate turbulence and moderate to high noise levels.

The rest of the paper is organized as follows. Section 2 provides a description of turbulence characterization statistics. It also explains the two turbulence simulation methods employed here. Section 3 introduces the TM methods studied in this paper. Specifically, we introduce our proposed BM-CNN method and compare and contrast it with the BM-WF benchmark method. Details of the CNN architecture and training processes are also provided in Sec. 3. Section 4 presents the experimental results, which include a detailed quantitative analysis. Finally, conclusions are articulated in Sec. 5.

2. Turbulence Simulation

2.1. Turbulence Characterization

Atmospheric optical turbulence impacts the imaging process by introducing random pockets of air with varying indices of refraction along the optical path. Variations in the index of refraction are often modeled with a refractive index structure function. This is given as

Eq. (1)

D_n(r) = \langle [ n(\mathbf{x} + \mathbf{r}) - n(\mathbf{x}) ]^2 \rangle = C_n^2 \, r^{2/3},
where x and r are three-dimensional spatial coordinate vectors, r = |r|, n(·) is the index of refraction, and ⟨·⟩ represents an ensemble mean operator. The parameter C_n^2 is the refractive index structure parameter, and it has units of m^-2/3. Typical values of C_n^2 tend to range from 1×10^-13 to 1×10^-17 m^-2/3.40 When this quantity varies along the optical path, it may be expressed as a function, C_n^2(z), and here we define z as the distance from the source.

Another key turbulence statistic that takes the full optical path into account is called the Fried parameter and is denoted r0. The Fried parameter is also known as the atmospheric coherence diameter.3,40 This parameter can be expressed in terms of the refractive index structure function as

Eq. (2)

r_0 = \left[ 0.423 \left( \frac{2\pi}{\lambda} \right)^{2} \int_{z=0}^{z=L} C_n^2(z) \left( \frac{z}{L} \right)^{5/3} dz \right]^{-3/5},
where λ is the wavelength and the path length is given by the variable L. The Fried parameter is the key parameter that governs the atmospheric optical transfer function (OTF), the PSF, and point source tilt variance.41 It is worth noting that a small r0 relative to the camera aperture indicates a high level of atmospheric turbulence.

Warping from turbulence can be characterized by the point source angular tilt variance. The two-axis Zernike tilt (Z-tilt) variance in units of radians squared is given by Tyson42 in terms of r0 as

Eq. (3)

T_Z^2 = 0.3641 \left( \frac{D}{r_0} \right)^{5/3} \left( \frac{\lambda}{D} \right)^{2},
where D is the aperture diameter of the optics. As turbulence strength increases, we see a smaller r_0 and increased tilt variance. The increase in tilt variance corresponds to increased warping in the observed imagery. There is also an increase in blurring, as governed by the OTF.

Another relevant statistic is the isoplanatic angle.3 Point sources separated by less than this angle will have similar PSFs and a mean wave function phase difference at the aperture of <1  rad.40,43 The isoplanatic angle can also be expressed as a weighted integral of Cn2(z), yielding40,43

Eq. (4)

\theta_0 = \left[ 2.91 \, k^2 L^{5/3} \int_{z=0}^{z=L} C_n^2(z) \left( 1 - \frac{z}{L} \right)^{5/3} dz \right]^{-3/5},
where k = 2π/λ is the optical wavenumber. Generally speaking, as the turbulence level increases, the isoplanatic patch size decreases. This means the warping is less correlated spatially over a given separation distance. This tends to make it harder to perform image registration.
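
To make Eqs. (2)–(4) concrete, the short Python sketch below evaluates them for a constant C_n^2(z) profile using the system values given later in Tables 1 and 2. This is purely illustrative and is not the authors' code; the constant-profile assumption and the conversion of the two-axis tilt variance to a per-axis RMS tilt in pixels are ours, although the outputs agree closely with the Table 2 entries.

```python
import numpy as np

def turbulence_stats(cn2, wavelength, path_len, aperture, focal_len, pixel_pitch):
    """Evaluate Eqs. (2)-(4) for a constant Cn^2 profile along the path."""
    k = 2.0 * np.pi / wavelength                      # optical wavenumber (rad/m)
    # Eq. (2): Fried parameter; for constant Cn^2, integral of (z/L)^(5/3) dz = 3L/8
    r0 = (0.423 * k**2 * cn2 * path_len * 3.0 / 8.0) ** (-3.0 / 5.0)
    # Eq. (3): two-axis Zernike tilt variance (rad^2)
    tilt_var = 0.3641 * (aperture / r0) ** (5.0 / 3.0) * (wavelength / aperture) ** 2
    # Eq. (4): isoplanatic angle; the (1 - z/L)^(5/3) weighting gives the same 3L/8 factor
    theta0 = (2.91 * k**2 * path_len ** (5.0 / 3.0)
              * cn2 * path_len * 3.0 / 8.0) ** (-3.0 / 5.0)
    # Convert angles to pixel units using the pixel instantaneous field of view
    ifov = pixel_pitch / focal_len                    # rad per pixel
    rms_tilt_pix = np.sqrt(tilt_var / 2.0) / ifov     # per-axis RMS tilt (pixels)
    theta0_pix = theta0 / ifov
    return r0, aperture / r0, theta0_pix, rms_tilt_pix

# Level 1 parameters from Tables 1 and 2 (Cn^2 = 0.0307e-15 m^(-2/3))
print(turbulence_stats(cn2=0.0307e-15, wavelength=0.525e-6, path_len=7000.0,
                       aperture=0.2034, focal_len=1.2, pixel_pitch=1.5487e-6))
# Approximate output: r0 = 0.386 m, D/r0 = 0.53,
# isoplanatic angle = 13.4 pixels, RMS tilt = 0.5 pixels
```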

The optical system parameters used for the analysis and results in this paper are listed in Table 1. The specific turbulence statistics are listed in Table 2. All of the experimental results presented in Sec. 4 are based on these values. We consider a visible system operating at relatively long range (7 km) at four different turbulence levels. We further assume that the sensor is Nyquist sampled relative to the diffraction-limited optical cutoff frequency. It is worth noting that some prior work has been done addressing the problems of turbulence and undersampling jointly using an approach based on the BM-WF method.41 We forgo that extra complication in this study, as the emphasis here is on using machine learning for TM. The optical and turbulence parameters have been selected to closely match those in the paper2 that describes the anisoplanatic simulator employed here. The rationale for this choice is that the simulator is thoroughly validated using those parameters.2 Thus, we have a high confidence that the data from the anisoplanatic simulator are realistic and consistent with key theoretical metrics.

Table 1

Optical parameters used in simulation results.

Parameter | Value
Aperture | D = 0.2034 m
Focal length | l = 1.2000 m
F-number | 5.8997
Wavelength | λ = 0.5250 μm
Object distance | L = 7000 m
Nyquist pixel spacing | δ = 1.5487 μm
Cutoff frequency | ρ_c = 322.8571 cycles/mm

Table 2

Turbulence parameters used in the simulation results.

Turbulence degradation

Parameter | Level 1 | Level 2 | Level 3 | Level 4
C_n^2 × 10^15 (m^-2/3) | 0.0307 | 0.1227 | 0.4909 | 1.9638
Theoretical r_0 (m) | 0.3863 | 0.1681 | 0.0732 | 0.0319
Theoretical D/r_0 (unitless) | 0.5265 | 1.2097 | 2.7789 | 6.3843
Isoplanatic angle (pixels) | 13.4454 | 5.8516 | 2.5472 | 1.1087
RMS tilt (pixels) | 0.5 | 1 | 2 | 4

The OTF for an imaging system in turbulence is well modeled as

Eq. (5)

H(\rho) = H_{\mathrm{atm}}(\rho) \, H_{\mathrm{dif}}(\rho),
where ρ = √(u² + v²), and u and v are the spatial frequencies in units of cycles per unit distance. The average atmospheric OTF component is commonly expressed as3

Eq. (6)

H_{\mathrm{atm}}(\rho) = \exp\left\{ -3.44 \left( \frac{\lambda l \rho}{r_0} \right)^{5/3} \left[ 1 - \alpha \left( \frac{\lambda l \rho}{D} \right)^{1/3} \right] \right\},
where l is the camera focal length and D is the aperture diameter. The parameter α here serves as a switch: the LE atmospheric OTF is generated by Eq. (6) when α = 0, and the ensemble-average SE OTF is given by Eq. (6) when α = 1. The SE OTF is effectively a shift-corrected OTF. When partial tilt correction is achieved by image registration, the parameter α ∈ (0, 1) can be used as a tilt correction factor1 to produce an OTF that accounts for residual tilt-variance motion blurring.

For an imaging system with a circular aperture the diffraction-limited OTF is given as44

Eq. (7)

H_{\mathrm{dif}}(\rho) = \begin{cases} \frac{2}{\pi} \left[ \cos^{-1}\left( \frac{\rho}{\rho_c} \right) - \frac{\rho}{\rho_c} \sqrt{ 1 - \left( \frac{\rho}{\rho_c} \right)^{2} } \right], & \rho \le \rho_c \\ 0, & \text{otherwise}. \end{cases}
Here, the variable ρ_c = 1/(λ f_n) is the optical cutoff frequency, with f_n = l/D being the f-number of the optics. It is worth noting that the theoretical LE PSF is the inverse Fourier transform of Eq. (5) with α = 0. Similarly, the theoretical average SE PSF is the inverse Fourier transform of Eq. (5) with α = 1.
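
Equations (5)–(7) are straightforward to evaluate numerically. The sketch below is a minimal illustration only (not the authors' implementation), assuming a radial frequency grid in cycles per meter and the parameter values from Tables 1 and 2.

```python
import numpy as np

def turbulence_otf(rho, r0, aperture, focal_len, wavelength, alpha=0.0):
    """Evaluate the total OTF of Eq. (5) on a grid of radial spatial
    frequencies rho (cycles per unit distance in the focal plane)."""
    # Eq. (6): average atmospheric OTF (alpha = 0 -> LE, alpha = 1 -> average SE)
    arg = wavelength * focal_len * rho
    h_atm = np.exp(-3.44 * (arg / r0) ** (5.0 / 3.0)
                   * (1.0 - alpha * (arg / aperture) ** (1.0 / 3.0)))
    # Eq. (7): diffraction-limited OTF of a circular aperture
    rho_c = aperture / (wavelength * focal_len)       # optical cutoff frequency
    x = np.clip(rho / rho_c, 0.0, 1.0)
    h_dif = np.where(rho <= rho_c,
                     (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x ** 2)),
                     0.0)
    return h_atm * h_dif                              # Eq. (5)

# Example: LE and average SE OTF profiles for turbulence level 3 (r0 = 0.0732 m)
rho = np.linspace(0.0, 322.8571e3, 512)               # cycles/m, up to the cutoff
h_le = turbulence_otf(rho, r0=0.0732, aperture=0.2034, focal_len=1.2,
                      wavelength=0.525e-6, alpha=0.0)
h_se = turbulence_otf(rho, r0=0.0732, aperture=0.2034, focal_len=1.2,
                      wavelength=0.525e-6, alpha=1.0)
```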

Examples of OTFs for the optical parameters and turbulence levels in Tables 1 and 2 are shown in Figs. 1 and 2. The LE OTFs are in Fig. 1 and the ensemble-average SE OTFs are in Fig. 2. Note that as C_n^2 increases, the OTF becomes more low pass in nature and has more blurring. Also, the LE OTFs have more blurring than the corresponding SE OTFs for the same turbulence level. It is worth noting that additional OTF components could be added to the model. These components could include defocus, optical aberrations, and atmospheric aerosol contributions. In future work, the addition of aerosol OTFs45–47 may be a particularly interesting study for terrestrial applications.

Fig. 1

LE OTFs for the parameters in Tables 1 and 2.


Fig. 2

SE OTFs for the parameters in Tables 1 and 2.


2.2. Anisoplanatic Numerical Wave-Propagation Simulation

The anisoplanatic simulator used here is a numerical wave-propagation method that produces realistic anisoplanatic turbulence degradation. Extensive validation has been performed with this simulator.2 Thus, we treat this type of numerical wave simulation method as the gold standard for anisoplanatic data generation. All of the wave-propagation simulation parameters used in this paper are the same as those reported in Hardie et al.2 The wave-propagation simulator takes as input the optical parameters of the camera system and the Cn2(z) profile. It then produces and applies a distinct PSF for every pixel in an input image to produce a degraded frame. It is worth noting that this simulator does not assume an LSI PSF.

The PSFs are formed by propagating point sources from the object plane to the pupil plane through a series of extended phase screens. The basic propagation geometry is shown in Fig. 3. It is worth noting that the dimensions and number of phase screens shown are just for illustrative purposes and are not necessarily the same as those used in Table 1. The left side of the figure corresponds to the object plane at z=0. The smallest square on the right represents the pupil plane, and the circle within represents the camera aperture. The red boxes represent extended phase screens. We employ 10 screens in our simulations here.2 The statistics of the phase screens are based on a modified von Kármán phase power spectral density (PSD).2,40 This includes the Kolmogorov PSD as a special case. For each point source in the object plane, the portions of the phase screens along the optical path are extracted and used for numerical wave propagation. These cropped portions are shown as green rectangles for one point source, and blue for another.

Fig. 3

Anisoplanatic numerical wave-propagation simulation geometry. Point sources in the object plane (z=0) are propagated using numerical wave propagation to the pupil plane of the camera through a series of phase screens. A unique PSF is formed in the focal plane of the simulated camera for each point source in the object plane.


Once the PSFs are generated, they are applied in a spatially varying convolution process to produce a degraded image. The centroids of the PSFs will vary, and this creates warping. The width of the PSFs themselves creates blurring. Because the PSFs vary spatially, the simulator produces anisoplanatic warp and blur. Note that realistic spatial correlation is naturally created because point sources will propagate through overlapping regions of the phase screens as shown in Fig. 3. The closer the point sources, the more overlap the two propagating waves have within the turbulent medium and the more correlation there will be between the resulting PSFs.
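
Conceptually, this spatially varying degradation can be written as a per-pixel weighted sum, with every output pixel formed using its own PSF. The following brute-force Python sketch illustrates only this idea; the actual simulator's PSF sampling, normalization, and boundary handling follow Ref. 2, and the function below is an illustration rather than a reproduction of it.

```python
import numpy as np

def apply_spatially_varying_psfs(truth, psfs):
    """Form a degraded frame by giving every output pixel its own PSF.

    truth : (H, W) ideal image
    psfs  : (H, W, P, P) array, one P x P PSF per pixel (P odd), each summing to 1
    """
    H, W = truth.shape
    P = psfs.shape[-1]
    pad = P // 2
    padded = np.pad(truth.astype(float), pad, mode='edge')
    degraded = np.zeros((H, W), dtype=float)
    # O(H * W * P^2) double loop, written for clarity rather than speed
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + P, j:j + P]
            # Weight the local neighborhood by this pixel's PSF
            degraded[i, j] = np.sum(patch * psfs[i, j])
    return degraded
```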

2.3. Fast Warping Simulation

The fast warping simulator used here is based on that presented by Schwartzman et al.27 This simulator works by creating 2D arrays of white Gaussian random noise. The noise is filtered with specially designed LSI filters to transform the white noise into random fields with realistic spatial tilt correlation statistics. These shifts are applied to the truth image through interpolation creating realistic geometric distortion.27

There are two important novel aspects of our fast warping simulator that differ from the one proposed by Schwartzman et al.27 First, in addition to the warping, we also introduce LSI PSF blurring using the average SE PSF from Eqs. (5) and (6) with α = 1. The other difference is that we employ the tilt statistics derived by Bose-Pillai et al.48 These statistics have been shown to provide excellent agreement with the anisoplanatic simulator statistics.2

It should be noted that the warping generated by our fast simulator is anisoplanatic, but the blurring is isoplanatic. Based on the small isoplanatic angles in Table 2, it is clear that this is not as accurate as the anisoplanatic simulator with respect to individual frames. However, when considering the average of many frames, the spatially varying PSFs will effectively combine to be nearly LSI. Thus, after frame averaging, our fast simulator and the anisoplanatic simulator will tend to be very comparable. We use this to our advantage to rapidly generate suitable training data for our proposed machine learning TM algorithm using the fast simulator. However, if sufficient computational resources are available, one could use the full anisoplanatic simulator to generate training data.
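
The sketch below illustrates the fast-simulator concept for a single frame: filter white Gaussian noise to obtain spatially correlated tilt fields, warp the truth image with those tilts, and then apply an LSI blur with the average SE PSF. For simplicity, it uses a Gaussian correlation filter with a user-chosen width, which is only a stand-in for the Bose-Pillai tilt correlation statistics48 used in our simulator; the function names, parameters, and placeholder PSF are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates
from scipy.signal import fftconvolve

def fast_warp_and_blur(truth, rms_tilt_pix, corr_sigma_pix, se_psf, rng):
    """Generate one SE frame: correlated tilt fields warp the truth image,
    then an LSI blur with the average SE PSF is applied.

    corr_sigma_pix controls the spatial correlation of the tilt fields; a
    Gaussian correlation filter is used here purely for illustration."""
    H, W = truth.shape
    shifts = []
    for _ in range(2):                        # independent row and column tilt fields
        white = rng.standard_normal((H, W))
        field = gaussian_filter(white, corr_sigma_pix)
        field *= rms_tilt_pix / field.std()   # scale to the desired RMS tilt (pixels)
        shifts.append(field)
    rows, cols = np.mgrid[0:H, 0:W].astype(float)
    warped = map_coordinates(truth.astype(float),
                             [rows + shifts[0], cols + shifts[1]],
                             order=1, mode='nearest')
    return fftconvolve(warped, se_psf, mode='same')   # LSI average-SE-PSF blur

# Example usage (illustrative): warp a random test image with 1-pixel RMS tilt
rng = np.random.default_rng(0)
truth = rng.random((256, 256))
se_psf = np.ones((5, 5)) / 25.0               # placeholder kernel, not a real SE PSF
frame = fast_warp_and_blur(truth, rms_tilt_pix=1.0, corr_sigma_pix=20.0,
                           se_psf=se_psf, rng=rng)
```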

2.4. Simulator Comparison

While the numerical wave anisoplanatic simulator is considered to be very accurate, it does come with a relatively high computational burden. The warping simulator with LSI blurring, on the other hand, is much faster. This makes it much more practical for generating the vast quantities of data necessary for DL training. To compare the speeds of the warping simulator and the anisoplanatic simulator, 30 frames of size 256×256 were generated with both simulators using MATLAB™, and the processing times on the same workstation were recorded. The anisoplanatic wave-propagation simulator took 289 s, or nearly 5 min, to complete this task. The warping simulator did the same in just over 3 s, a speed improvement of roughly two orders of magnitude. This timing study has been conducted using a workstation with an Intel(R) Core(TM) i9-9980XE CPU @ 3 GHz with 128 GB of RAM and two NVIDIA Titan RTX GPUs. Considering that DL training requires thousands of images, some of which are much larger than 256×256, one can clearly see the practical benefit of the fast simulator.

However, speed is not the only consideration. The fast simulator is only useful if it generates images that are statistically representative of the real images to be processed. In this paper, we focus on TM methods that process an average of registered SE frames. Thus, the simple LSI PSF used by the fast simulator is not problematic, since the average of the anisoplanatic images should be similar. This similarity allows us to train the proposed BM-CNN TM method using the fast warping simulator. However, for thoroughness and rigor, we test using both the fast simulator and the anisoplanatic wave-propagation simulator. As will be seen in Sec. 4, the results show that training with the fast simulator is highly effective.

An example simulated SE frame using both the wave-propagation simulation and the fast warping simulator is shown in Fig. 4. The full ideal image is shown in Fig. 4(a). A 200×200 region of interest is shown in Fig. 4(b). The numerical wave-propagation simulation output for level 3 turbulence is shown in Fig. 4(c). The same image with the optical flow arrows from turbulence warping is shown in Fig. 4(d). Similar results using the fast warping simulator with average SE PSF blur are shown in Figs. 4(e) and 4(f). One can see that the degraded images using the two different simulation methods have a similar level of blur and warping.

Fig. 4

Turbulence simulator output comparison for level 3 turbulence and no noise. (a) Full size ideal image; (b) 200×200 region of interest; (c) wave-propagation simulation output; (d) wave-propagation output with optical flow; (e) fast warping simulator output; (f) fast warping simulator output with optical flow.


3. Turbulence Mitigation

In this section, we define the TM methods studied here. For the reader’s convenience, all the methods are listed along with their acronyms in Table 3. The raw SE image from the simulator plus noise is referred to simply as SE. The LE image formed as the average of the SE frames is listed as LE in Table 3. The method of forming the average of images registered using BM is listed as BM-AVG. The next benchmark method listed in Table 3 is the LE followed by restoration using a WF (LE-WF). The theoretical LE PSF is used for the WF, and the noise-to-signal ratio (NSR) parameter is optimized on the training data.1 The last two methods listed in Table 3 combine BM fusion with some type of image restoration. These two methods are described in more detail in the following sections.

Table 3

TM methods used and their acronyms.

TM method | Acronym
Short exposure | SE
Long exposure | LE
Block matching and averaging | BM-AVG
Long exposure and Wiener filter | LE-WF
Block matching and Wiener filter | BM-WF
Block matching and convolutional neural network | BM-CNN

3.1. Block Matching and Wiener Filtering

The BM-WF method was proposed and described in detail by Hardie et al. in Ref. 1. A block diagram showing the key steps of this method is provided in Fig. 5. The SE frames are acquired and then averaged to form a prototype image with approximately the correct geometry. BM is then used to dewarp the SE images to the greatest extent possible. A B×B block size is used with B = 15 here. The search window is S×S, where S = 7, 9, 11, and 13 for the four turbulence levels in Table 2, respectively. The partially dewarped SE images are then averaged. Finally, the average image is processed with a WF. The WF has two parameters: the NSR and the parameter α from Eq. (6) that represents the amount of tilt correction provided by the BM registration step.1 Here, we optimize these two parameters using the training data to make a fair comparison with the DL method, which also has access to training data.
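
A simplified sketch of the two main BM-WF stages is given below: an exhaustive-search BM dewarping of each SE frame against the prototype average, followed by frequency-domain Wiener deconvolution of the fused image. The actual implementation in Ref. 1 differs in detail (e.g., block handling, interpolation, and the α-parameterized OTF fit), and interpreting the S×S search window as a grid of candidate displacements is our assumption.

```python
import numpy as np

def block_match_dewarp(frame, prototype, B=15, S=9):
    """Dewarp one SE frame by shifting each B x B block to the displacement
    (within an S x S candidate grid) that best matches the prototype image."""
    frame = frame.astype(float)
    prototype = prototype.astype(float)
    H, W = frame.shape
    out = frame.copy()
    r = S // 2
    for i in range(0, H - B + 1, B):
        for j in range(0, W - B + 1, B):
            ref = prototype[i:i + B, j:j + B]
            best_err, best = np.inf, (0, 0)
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if ii < 0 or jj < 0 or ii + B > H or jj + B > W:
                        continue
                    err = np.sum((frame[ii:ii + B, jj:jj + B] - ref) ** 2)
                    if err < best_err:
                        best_err, best = err, (di, dj)
            di, dj = best
            out[i:i + B, j:j + B] = frame[i + di:i + di + B, j + dj:j + dj + B]
    return out

def wiener_restore(fused, otf, nsr):
    """LSI Wiener deconvolution of the fused (registered and averaged) image.
    The OTF array must use the same frequency ordering as np.fft.fft2."""
    G = np.fft.fft2(fused)
    W = np.conj(otf) / (np.abs(otf) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))
```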

Fig. 5

BM-WF TM block diagram.1


3.2. Block Matching and Convolutional Neural Network

In the proposed BM-CNN method, we use a CNN instead of the WF to perform restoration. A block diagram detailing the process can be seen in Fig. 6. The CNN is non-linear in nature and we believe it is better equipped to address the non-stationary nature of the imagery that results from the BM fusion step.

Fig. 6

Proposed BM-CNN TM block diagram. It is worth noting that we have replaced the WF in the BM-WF method with a CNN to improve performance.


As previously stated in Sec. 1, our BM-CNN uses the FIFNet architecture. In particular, 10 convolution (Conv) + rectified linear unit (ReLU) activation layers are employed. The first five convolution layers use a filter size of 3×3, and the last five use a filter size of 5×5. A diagram of the network architecture is shown in Fig. 7. A listing and description of all of the layers is provided in Table 4. As can be seen in Table 4, the total number of layers is 27. By current standards, this represents a relatively small network. This allows the network to be trained from scratch in a relatively short time. The network is implemented in MATLAB™ using the Deep Learning Toolbox™.
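
For readers who prefer code over the layer listing, the snippet below sketches a comparable valid-convolution stack in PyTorch. It mirrors only the 3×3/5×5 conv + ReLU layers and the 61×61-to-31×31 patch shrinkage; the splitting, crop, and concatenate layers of the actual FIFNet (Table 4) are omitted, and the original network is implemented in MATLAB™, not PyTorch, so this is an approximation rather than the authors' model.

```python
import torch
import torch.nn as nn

class ValidConvStack(nn.Module):
    """Simplified stand-in for the FIFNet-style restoration CNN: ten unpadded
    (valid) conv layers, the first five 3x3 and the last five 5x5, so a 61x61
    input patch shrinks to a 31x31 output patch."""

    def __init__(self, channels=64):
        super().__init__()
        layers = []
        in_ch = 1
        for _ in range(5):                                   # 3x3 conv + ReLU
            layers += [nn.Conv2d(in_ch, channels, 3, padding=0), nn.ReLU()]
            in_ch = channels
        for _ in range(4):                                   # 5x5 conv + ReLU
            layers += [nn.Conv2d(in_ch, channels, 5, padding=0), nn.ReLU()]
        layers += [nn.Conv2d(channels, 1, 5, padding=0)]     # final 5x5 conv
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)

net = ValidConvStack()
y = net(torch.zeros(1, 1, 61, 61))
print(y.shape)   # torch.Size([1, 1, 31, 31])
```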

Fig. 7

FIFNet CNN network architecture used for the proposed BM-CNN TM method.


Table 4

Description of the FIFNet layers38 used for the proposed BM-CNN TM method. The spatial dimensions of the activations are for training using 61×61 image patches.

Layers | Type | Spatial kernel | Activations
1 | Input | — | 61×61×1
2 | Splitting | — | 61×61×1
3, 4 | 2D conv + ReLU | 3×3 | 59×59×64
5, 6 | 2D conv + ReLU | 3×3 | 57×57×64
7, 8 | 2D conv + ReLU | 3×3 | 55×55×64
9, 10 | 2D conv + ReLU | 3×3 | 53×53×64
11, 12 | 2D conv + ReLU | 3×3 | 51×51×64
13, 14 | 2D conv + ReLU | 5×5 | 47×47×64
15, 16 | 2D conv + ReLU | 5×5 | 43×43×64
17, 18 | 2D conv + ReLU | 5×5 | 39×39×64
19, 20 | 2D crop + ReLU | — | 39×39×1
21 | Concatenate | — | 39×39×65
22, 23 | 2D conv + ReLU | 5×5 | 35×35×65
24 | 2D crop | — | 35×35×1
25 | Concatenate | — | 35×35×66
26 | 2D conv | 5×5 | 31×31×1
27 | Regression | — | —

Training and testing of the network are done here using the DIV2K database.49 This dataset has become popular for testing image restoration and image SR methods. It contains 800 high-resolution images for training and 100 for testing. The original images are 24-bit RGB images with 2K resolution. All of the input images have been converted to grayscale with a floating point dynamic range of 0 to 1. They have also been reduced in size by a scaling factor of 0.25.

The basic training process is shown in Fig. 8. The undegraded training data serve as the “truth” images. These images are fed into the fast warping simulator to produce the degraded images. BM is applied to these degraded images, and the registered images are then averaged and fed into the CNN. During training iterations, image patches of size 61×61 are extracted from random positions in each training image. The corresponding truth patches of size 31×31 are also extracted for regression. The convolutional layers of the CNN do not use zero padding, so the processed image patches get smaller with each layer, as shown in Table 4. This is why the truth patch is smaller than the input patch. The advantage of this is that it allows for a smaller network and avoids extrapolation errors at the patch borders.38
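
The patch geometry can be summarized as follows: because the valid convolutions remove 15 pixels from each side, the 31×31 truth patch is the center of the 61×61 input patch. A minimal, illustrative sketch of this pairing is shown below (the function name and interface are ours).

```python
import numpy as np

def sample_training_pair(degraded_avg, truth, rng, in_size=61, out_size=31):
    """Draw one training pair: a 61x61 input patch from the BM-averaged image
    and the co-centered 31x31 truth patch used as the regression target."""
    H, W = truth.shape
    margin = (in_size - out_size) // 2           # 15 pixels lost to valid convolutions
    i = rng.integers(0, H - in_size + 1)
    j = rng.integers(0, W - in_size + 1)
    x = degraded_avg[i:i + in_size, j:j + in_size]
    y = truth[i + margin:i + margin + out_size, j + margin:j + margin + out_size]
    return x, y
```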

Fig. 8

BM-CNN training process block diagram.


CNN updates during training are done using stochastic gradient descent with momentum optimization. Sixty-four different random patches are extracted from each training image. We use a mini-batch size of 64 so that each iteration corresponds to one training image. All training is done with a total of 100 epochs. We set the initial learning rate to 0.1 and reduce it by a factor of 0.1 after every 10 epochs. Using the same workstation described in Sec. 2.4, the total training time for this network is approximately 100 min.
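
A rough PyTorch equivalent of this training configuration is sketched below, assuming the ValidConvStack sketch above and synthetic tensors standing in for real BM-averaged/truth patch pairs. The momentum value and MSE loss are our assumptions, since the description above specifies only SGD with momentum and a regression output layer.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

net = ValidConvStack()                   # sketch from Sec. 3.2 above (assumed)
x = torch.rand(640, 1, 61, 61)           # placeholder BM-averaged input patches
y = torch.rand(640, 1, 31, 31)           # placeholder co-centered truth patches
loader = DataLoader(TensorDataset(x, y), batch_size=64, shuffle=True)

optimizer = torch.optim.SGD(net.parameters(), lr=0.1, momentum=0.9)  # momentum assumed
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
loss_fn = torch.nn.MSELoss()             # stand-in for the regression output layer

for epoch in range(100):
    for x_batch, y_batch in loader:      # 64-patch mini-batches
        optimizer.zero_grad()
        loss = loss_fn(net(x_batch), y_batch)
        loss.backward()
        optimizer.step()
    scheduler.step()                     # learning rate reduced by 10x every 10 epochs
```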

4. Experimental Results

All of the experimental results have been generated using the optical parameters from Table 1 and the turbulence degradation parameters from Table 2. For each of the four turbulence levels, the network has been trained using the fast warping simulator at noise standard deviations of 10 and 30, relative to an 8-bit grayscale image range. The results are all formed using 30-frame SE input sequences for each turbulence and noise scenario, generated from each of the 800 original DIV2K ideal truth images. For testing, the 100 DIV2K validation images are used. Here, we generate the 30-frame input sequences using both the fast warping simulator and the anisoplanatic simulator.

4.1. Quantitative Results

To quantitatively compare the results of all the TM methods, the metrics of peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM)50 are used. For both metrics, higher values indicate better quality. SSIM values range from 0 to 1, so values close to 1 are desired. The quantitative error metric values are averaged over the 100 validation images with a 5-pixel border ignored on all sides to exclude any border artifacts.
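
One straightforward way to compute these metrics with the 5-pixel border exclusion is sketched below. This is illustrative only; the exact SSIM parameterization used for the reported results is not specified beyond Ref. 50, so the scikit-image defaults are assumed here.

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr_ssim_with_border(restored, truth, border=5, data_range=1.0):
    """Compute PSNR (dB) and SSIM after discarding a 'border'-pixel margin on
    all sides, matching the evaluation protocol described above."""
    r = restored[border:-border, border:-border]
    t = truth[border:-border, border:-border]
    mse = np.mean((r - t) ** 2)
    psnr = 10.0 * np.log10(data_range ** 2 / mse)
    ssim = structural_similarity(t, r, data_range=data_range)
    return psnr, ssim
```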

The PSNR results are provided in Tables 5–8 for the four turbulence levels in Table 2. The corresponding SSIM results are in Tables 9–12. Data generated from the fast warping simulator are indicated with the designation “Warp” in the results tables. Data from the anisoplanatic numerical wave-propagation simulator are designated “Aniso.” Bold entries in the tables correspond to the best results. For the reader’s convenience, the quantitative PSNR values are also plotted in Figs. 9 and 10 for noise standard deviations of 10 and 30, respectively. The corresponding SSIM values are plotted in Figs. 11 and 12. The plotted data are for testing on the numerical wave-propagation anisoplanatic simulator.

Table 5

Average PSNR (dB) results for 100 validation sequences with turbulence level 1 and 30 input frames.

Simulated data: training/testing
Mitigation | Noise std = 10, Warp/Warp | Noise std = 10, Warp/Aniso | Noise std = 30, Warp/Warp | Noise std = 30, Warp/Aniso
SE | 23.3489 | 23.2323 | 17.6345 | 17.6083
LE | 26.1827 | 25.9293 | 25.3834 | 25.1803
BM-AVG | 26.7063 | 26.4955 | 25.4702 | 25.3453
LE-WF | 29.7577 | 29.7500 | 27.2442 | 27.1366
BM-WF | 30.3766 | 30.3271 | 27.2241 | 27.2097
BM-CNN | 31.8466 | 31.6640 | 28.8948 | 28.7915

Table 6

Average PSNR (dB) results for 100 validation sequences with turbulence level 2 and 30 input frames.

Simulated data: training/testing
Mitigation | Noise std = 10, Warp/Warp | Noise std = 10, Warp/Aniso | Noise std = 30, Warp/Warp | Noise std = 30, Warp/Aniso
SE | 21.6393 | 21.4881 | 17.0607 | 17.0140
LE | 24.1923 | 23.8974 | 23.6663 | 23.4118
BM-AVG | 25.6641 | 25.5496 | 24.4865 | 24.4400
LE-WF | 27.1175 | 26.9127 | 25.5973 | 25.3992
BM-WF | 29.2104 | 29.3532 | 26.3433 | 26.4179
BM-CNN | 30.2577 | 30.3175 | 27.5861 | 27.6535

Table 7

Average PSNR (dB) results for 100 validation sequences with turbulence level 3 and 30 input frames.

Simulated data: training/testing
Mitigation | Noise std = 10, Warp/Warp | Noise std = 10, Warp/Aniso | Noise std = 30, Warp/Warp | Noise std = 30, Warp/Aniso
SE | 19.7836 | 19.5549 | 16.2892 | 16.1915
LE | 21.7032 | 21.3931 | 21.4016 | 21.1164
BM-AVG | 23.2272 | 23.1571 | 22.3702 | 22.3104
LE-WF | 23.8516 | 23.4875 | 23.2291 | 22.9434
BM-WF | 26.2424 | 26.5105 | 24.1303 | 24.2210
BM-CNN | 26.5992 | 26.8513 | 24.8633 | 24.9602

Table 8

Average PSNR (dB) results for 100 validation sequences with turbulence level 4 and 30 input frames.

Simulated data: training/testing
Mitigation | Noise std = 10, Warp/Warp | Noise std = 10, Warp/Aniso | Noise std = 30, Warp/Warp | Noise std = 30, Warp/Aniso
SE | 18.2356 | 17.9519 | 15.5183 | 15.3691
LE | 19.5227 | 19.2212 | 19.3411 | 19.0539
BM-AVG | 20.2898 | 20.1493 | 19.8932 | 19.7544
LE-WF | 21.2001 | 20.7712 | 20.9670 | 20.6208
BM-WF | 22.4137 | 22.3624 | 21.4067 | 21.3097
BM-CNN | 22.5889 | 22.5473 | 21.6959 | 21.6129

Table 9

Average SSIM results for 100 validation sequences with turbulence level 1 and 30 input frames.

Simulated data: training/testing
Mitigation | Noise std = 10, Warp/Warp | Noise std = 10, Warp/Aniso | Noise std = 30, Warp/Warp | Noise std = 30, Warp/Aniso
SE | 0.5435 | 0.5576 | 0.2482 | 0.2627
LE | 0.8060 | 0.8055 | 0.7005 | 0.7064
BM-AVG | 0.8230 | 0.8250 | 0.6856 | 0.6969
LE-WF | 0.8572 | 0.8652 | 0.7607 | 0.7701
BM-WF | 0.8630 | 0.8717 | 0.7444 | 0.7585
BM-CNN | 0.9264 | 0.9299 | 0.8556 | 0.8613

Table 10

Average SSIM results for 100 validation sequences with turbulence level 2 and 30 input frames.

Simulated data: training/testing
Mitigation | Noise std = 10, Warp/Warp | Noise std = 10, Warp/Aniso | Noise std = 30, Warp/Warp | Noise std = 30, Warp/Aniso
SE | 0.4589 | 0.4702 | 0.2071 | 0.2197
LE | 0.7086 | 0.7042 | 0.6088 | 0.6108
BM-AVG | 0.7817 | 0.7887 | 0.6350 | 0.6515
LE-WF | 0.7960 | 0.8009 | 0.7065 | 0.7125
BM-WF | 0.8395 | 0.8542 | 0.7181 | 0.7373
BM-CNN | 0.9006 | 0.9104 | 0.8158 | 0.8287

Table 11

Average SSIM results for 100 validation sequences with turbulence level 3 and 30 input frames.

Simulated data: training/testing
Mitigation | Noise std = 10, Warp/Warp | Noise std = 10, Warp/Aniso | Noise std = 30, Warp/Warp | Noise std = 30, Warp/Aniso
SE | 0.3647 | 0.3697 | 0.1622 | 0.1706
LE | 0.5597 | 0.5489 | 0.4733 | 0.4687
BM-AVG | 0.6534 | 0.6644 | 0.5160 | 0.5296
LE-WF | 0.6712 | 0.6673 | 0.6077 | 0.6067
BM-WF | 0.7482 | 0.7772 | 0.6277 | 0.6493
BM-CNN | 0.7936 | 0.8180 | 0.7095 | 0.7275

Table 12

Average SSIM results for 100 validation sequences with turbulence level 4 and 30 input frames.

Simulated data: training/testing
Mitigation | Noise std = 10, Warp/Warp | Noise std = 10, Warp/Aniso | Noise std = 30, Warp/Warp | Noise std = 30, Warp/Aniso
SE | 0.2963 | 0.2973 | 0.1324 | 0.1376
LE | 0.4618 | 0.4458 | 0.3870 | 0.3771
BM-AVG | 0.4903 | 0.4839 | 0.3992 | 0.3966
LE-WF | 0.5417 | 0.5283 | 0.5186 | 0.5078
BM-WF | 0.5788 | 0.5911 | 0.5228 | 0.5251
BM-CNN | 0.6033 | 0.6168 | 0.5540 | 0.5584

Fig. 9

PSNR (dB) results for the one hundred 30-frame validation sequences for the turbulence levels in Table 2 and a noise standard deviation of 10.


Fig. 10

PSNR (dB) results for the one hundred 30-frame validation sequences for the turbulence levels in Table 2 and a noise standard deviation of 30.


Fig. 11

SSIM results for the one hundred 30-frame validation sequences for the turbulence levels in Table 2 and a noise standard deviation of 10.


Fig. 12

SSIM results for the one hundred 30-frame validation sequences for the turbulence levels in Table 2 and a noise standard deviation of 30.


One can see that the raw SE images score the lowest in the performance metrics. The LE images perform better. This is due to the geometric correction provided by the frame averaging. Averaging the frames after BM with the BM-AVG method provides further improvement. Here we get the benefit of the LE geometric correction but with less turbulence motion blur. The LE-WF gives the next best result as it includes restoration from the WF. Then we have the BM-WF improving upon that, thanks to both the registration and restoration. Finally, we see that the BM-CNN gives the highest quantitative performance values. The relative improvement is most pronounced at lower turbulence levels and higher noise levels.

4.2. Qualitative Image Results

Image results are shown in Figs. 13–16 for subjective evaluation. All of the images are 200×200 regions of interest with the same layout. Image (a) is the truth image and (b) is the raw SE image. The LE and LE-WF are shown in (c) and (d), respectively. The BM-WF is shown in (e), and the BM-CNN is shown in (f).

Fig. 13

TM results using the anisoplanatic simulator data as testing data (Penguin). These results are for turbulence level 2, 30 input frames, and noise standard deviation of 30. (a) Truth; (b) SE; (c) LE; (d) LE-WF; (e) BM-WF; (f) BM-CNN.


Fig. 14

TM results using the anisoplanatic simulator data as testing data (Train). These results are for turbulence level 2, 30 input frames, and noise standard deviation of 30. (a) Truth; (b) SE; (c) LE; (d) LE-WF; (e) BM-WF; (f) BM-CNN.


Fig. 15

TM results using the anisoplanatic simulator data as testing data (Penguin). These results are for turbulence level 3, 30 input frames, and noise standard deviation of 10. (a) Truth; (b) SE; (c) LE; (d) LE-WF; (e) BM-WF; (f) BM-CNN.


Fig. 16

TM results using the anisoplanatic simulator data as testing data (Train). These results are for turbulence level 3, 30 input frames, and noise standard deviation of 10. (a) Truth; (b) SE; (c) LE; (d) LE-WF; (e) BM-WF; (f) BM-CNN.


Figure 13 shows the penguin image for level 2 turbulence and a noise standard deviation of 30. The high noise level is evident in the raw SE image in Fig. 13(b). The LE image in Fig. 13(c) has reduced noise from averaging 30 frames but is highly blurred. The sharpening effect of the WF in Fig. 13(d) is clear in the LE-WF image. The BM-WF in Fig. 13(e) is sharper than the LE-WF because of the reduced turbulence motion blurring from the registration. Finally, note that the BM-CNN image in Fig. 13(f) appears to have the same level of sharpness as the BM-WF, but with superior noise suppression. The improved noise suppression is clearly visible in the white region of the penguin’s belly. We attribute this to the ability of the CNN to be non-linear and spatially adaptive. In contrast, the fixed WF is limited to LSI processing. The same scenario is shown in Fig. 14 for an image of a train. Similar relative performance of the methods is observed here as well.

Level 3 turbulence results for the penguin image are shown in Fig. 15 with a lower noise standard deviation of 10. The lower noise is evident by comparing Fig. 15(b) to Fig. 13(b). The increase in turbulence blurring is most pronounced when comparing Fig. 15(c) to Fig. 13(c). Here, both the BM-WF and BM-CNN provide good results, but again the BM-CNN provides the same level of sharpness with noticeably more noise reduction. The same scenario for the image of a train is shown in Fig. 16 with similar subjective results.

5. Conclusions

A DL approach to combating the image degradation from atmospheric turbulence is introduced in this paper. In particular, we proposed the BM-CNN TM method that builds on earlier work by some of the current authors with the BM-WF1 method. In our new approach, we have replaced the WF with a CNN and employ DL to train the network. The CNN architecture we use here is based on the FIFNet.38 The FIFNet has a lightweight architecture with relatively few layers. This allows us to train the network from scratch, without the need for transfer learning, in approximately 100 min. The BM-CNN method is one of the first in the literature to apply a CNN to the problem of TM.

In addition to the important innovation of using a CNN for TM, we have also employed, for the first time, two different turbulence simulation methods. Because of the high data requirements for training, we employ a fast warping simulator for generating training images. The fast simulator extends the method introduced by Schwartzman et al.27 Our novel modifications include the use of different tilt statistics and the introduction of LSI blurring using the average SE PSF, as described in Sec. 2. While training is done exclusively with the fast simulator, testing is performed here using both the fast simulator and a numerical wave-propagation anisoplanatic simulator.2 We believe numerical wave propagation remains the gold standard for anisoplanatic turbulence simulation. However, the fast simulator is roughly 100 times faster. The BM-CNN trained on the fast warping simulator gives almost identical performance results on testing data generated by both the warping simulator and the numerical wave-propagation anisoplanatic simulator. We believe this demonstration is significant: it shows that the computationally faster simulation method is well suited for training, which dramatically increases the speed of training data generation and makes the DL process much more practical in this application.

The proposed BM-CNN is compared to several benchmark methods in Sec. 4. In all of the quantitative performance experiments conducted, the BM-CNN was the best performing method. The BM-WF was the next best performing method. In subjective evaluation of the images, we have observed that the big advantage of the BM-CNN appears to be in how it handles noise. The non-linear and spatially adaptive CNN method outperforms the WF and provides noticeably more noise suppression in the flat areas while providing the same level of sharpness in the dynamic areas. This gives very pleasing visual results that are supported by the favorable performance metrics.

Acknowledgments

The authors would like to thank Joe French, Bruce Wilcoxen, and the project team at Leidos for project management support. This work was supported by the Air Force Research Laboratory (AFRL), Sensors Directorate under contract FA8650-18-F-1710. The contents of this document are approved for public release under case number AFRL 88ABW-2020-3547. The authors declare that there are no conflicts of interest to disclose.

References

1. R. C. Hardie et al., “Block matching and Wiener filtering approach to optical turbulence mitigation and its application to simulated and real imagery with quantitative error analysis,” Opt. Eng. 56(7), 071503 (2017). https://doi.org/10.1117/1.OE.56.7.071503
2. R. C. Hardie et al., “Simulation of anisoplanatic imaging through optical turbulence using numerical wave propagation with new validation analysis,” Opt. Eng. 56(7), 071502 (2017). https://doi.org/10.1117/1.OE.56.7.071502
3. M. C. Roggemann and B. M. Welsh, Imaging Through Turbulence, Laser and Optical Science and Technology, CRC Press (1996).
4. R. E. Hufnagel and N. R. Stanley, “Modulation transfer function associated with image transmission through turbulent media,” J. Opt. Soc. Am. 54, 52–61 (1964). https://doi.org/10.1364/JOSA.54.000052
5. A. W. M. van Eekeren et al., “Turbulence compensation: an overview,” Proc. SPIE 8355, 83550Q (2012). https://doi.org/10.1117/12.918544
6. A. Lambert et al., “Experiments in turbulence induced super-resolution in surveillance imagery,” Proc. SPIE 7126, 712612 (2009). https://doi.org/10.1117/12.823245
7. Y. Liu, “Mitigation of the effects of turbulence in the imaging path,” University of New South Wales Australian Defence Force Academy (2016).
8. C. L. Matson et al., “Multiframe blind deconvolution and bispectrum processing of atmospherically degraded data: a comparison,” Proc. SPIE 4792, 55–66 (2002). https://doi.org/10.1117/12.451796
9. G. E. Archer, J. P. Bos, and M. C. Roggemann, “Reconstruction of long horizontal-path images under anisoplanatic conditions using multiframe blind deconvolution,” Opt. Eng. 52(8), 083108 (2013). https://doi.org/10.1117/1.OE.52.8.083108
10. G. E. Archer, J. P. Bos, and M. C. Roggemann, “Comparison of bispectrum, multiframe blind deconvolution and hybrid bispectrum-multiframe blind deconvolution image reconstruction techniques for anisoplanatic, long horizontal-path imaging,” Opt. Eng. 53(4), 043109 (2014). https://doi.org/10.1117/1.OE.53.4.043109
11. D. Fraser, G. Thorpe, and A. Lambert, “Atmospheric turbulence visualization with wide-area motion-blur restoration,” J. Opt. Soc. Am. A 16, 1751–1758 (1999). https://doi.org/10.1364/JOSAA.16.001751
12. D. H. Frakes, J. W. Monaco, and M. J. T. Smith, “Suppression of atmospheric turbulence in video using an adaptive control grid interpolation approach,” in IEEE Int. Conf. Acoustics, Speech, and Signal Process., 1881–1884 (2001). https://doi.org/10.1109/ICASSP.2001.941311
13. D. Li, R. M. Mersereau, and S. Simske, “Atmospheric turbulence-degraded image restoration using principal components analysis,” IEEE Geosci. Remote Sens. Lett. 4, 340–344 (2007). https://doi.org/10.1109/LGRS.2007.895691
14. X. Zhu and P. Milanfar, “Image reconstruction from videos distorted by atmospheric turbulence,” Proc. SPIE 7543, 75430S (2010). https://doi.org/10.1117/12.840127
15. X. Zhu and P. Milanfar, “Removing atmospheric turbulence via space-invariant deconvolution,” IEEE Trans. Pattern Anal. Mach. Intell. 35, 157–170 (2013). https://doi.org/10.1109/TPAMI.2012.82
16. T. W. Lawrence et al., “Experimental validation of extended image reconstruction using bispectral speckle interferometry,” Proc. SPIE 1237, 522–537 (1990). https://doi.org/10.1117/12.19323
17. T. W. Lawrence et al., “Extended-image reconstruction through horizontal path turbulence using bispectral speckle interferometry,” Opt. Eng. 31(3), 627–636 (1992). https://doi.org/10.1117/12.56083
18. C. J. Carrano, “Speckle imaging over horizontal paths,” Proc. SPIE 4825, 109–120 (2002). https://doi.org/10.1117/12.453519
19. C. J. Carrano, “Anisoplanatic performance of horizontal-path speckle imaging,” Proc. SPIE 5162, 14–27 (2003). https://doi.org/10.1117/12.508082
20. J. P. Bos and M. C. Roggemann, “Mean squared error performance of speckle-imaging using the bispectrum in horizontal imaging applications,” Proc. SPIE 8056, 805603 (2011). https://doi.org/10.1117/12.884093
21. J. P. Bos and M. C. Roggeman, “The effect of free parameter estimates on the reconstruction of turbulence corrupted images using the bispectrum,” Proc. SPIE 8161, 816105 (2011). https://doi.org/10.1117/12.893859
22. J. P. Bos and M. C. Roggemann, “Robustness of speckle-imaging techniques applied to horizontal imaging scenarios,” Opt. Eng. 51(8), 083201 (2012). https://doi.org/10.1117/1.OE.51.8.083201
23. J. P. Bos and M. C. Roggemann, “Blind image quality metrics for optimal speckle image reconstruction in horizontal imaging scenarios,” Opt. Eng. 51(10), 107003 (2012). https://doi.org/10.1117/1.OE.51.10.107003
24. J. P. Bos and M. C. Roggeman, “Simulation of extended scenes imaged through turbulence over horizontal paths,” Proc. SPIE 8161, 816106 (2011). https://doi.org/10.1117/12.893869
25. J. P. Bos and M. C. Roggemann, “Technique for simulating anisoplanatic image formation over long horizontal paths,” Opt. Eng. 51, 101704 (2012). https://doi.org/10.1117/1.OE.51.10.101704
26. G. Monnier, F.-R. Duval, and S. Amram, “GPU-based simulation of optical propagation through turbulence for active and passive imaging,” Proc. SPIE 9242, 92421R (2014). https://doi.org/10.1117/12.2067492
27. A. Schwartzman et al., “Turbulence-induced 2D correlated image distortion,” in IEEE Int. Conf. Comput. Photogr. (ICCP), 1–13 (2017). https://doi.org/10.1109/ICCPHOT.2017.7951490
28. V. Jain et al., “Supervised learning of image restoration with convolutional networks,” in IEEE 11th Int. Conf. Comput. Vision, 1–8 (2007). https://doi.org/10.1109/ICCV.2007.4408909
29. K. Zhang et al., “Learning deep CNN denoiser prior for image restoration,” in Proc. IEEE Conf. Comput. Vision and Pattern Recognit. (2017). https://doi.org/10.1109/CVPR.2017.300
30. X. Mao, C. Shen, and Y.-B. Yang, “Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections,” in Adv. Neural Inf. Process. Syst. 29, 2802–2810 (2016).
31. Y. Chang et al., “HSI-DeNet: hyperspectral image restoration via convolutional neural network,” IEEE Trans. Geosci. Remote Sens. 57(2), 667–682 (2019). https://doi.org/10.1109/TGRS.2018.2859203
32. W. Dong et al., “Denoising prior driven deep neural network for image restoration,” IEEE Trans. Pattern Anal. Mach. Intell. 41(10), 2305–2318 (2019). https://doi.org/10.1109/TPAMI.2018.2873610
33. C. Dong et al., “Image super-resolution using deep convolutional networks,” IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2016). https://doi.org/10.1109/TPAMI.2015.2439281
34. X. Ji, Y. Lu, and L. Guo, “Image super-resolution with deep convolutional neural network,” in IEEE First Int. Conf. Data Sci. in Cybersp., 626–630 (2016). https://doi.org/10.1109/TPAMI.2015.2439281
35. M. Kawulok et al., “Deep learning for multiple-image super-resolution,” IEEE Geosci. Remote Sens. Lett. 17(6), 1062–1066 (2020). https://doi.org/10.1109/LGRS.2019.2940483
36. J. Chen et al., “Single image super-resolution based on deep learning and gradient transformation,” in IEEE 13th Int. Conf. Signal Process., 663–667 (2016). https://doi.org/10.1109/ICSP.2016.7877915
37. J. Kim, J. K. Lee, and K. M. Lee, “Accurate image super-resolution using very deep convolutional networks,” in IEEE Conf. Comput. Vision and Pattern Recognit., 1646–1654 (2016). https://doi.org/10.1109/CVPR.2016.182
38. H. Elwarfalli and R. C. Hardie, “Fifnet: a convolutional neural network for motion-based multiframe super-resolution using fusion of interpolated frames,” Comput. Vision Image Understanding 202, 103097 (2021). https://doi.org/10.1016/j.cviu.2020.103097
39. J. Gao, N. Anantrasirichai, and D. Bull, “Atmospheric turbulence removal using convolutional neural network” (2019).
40. J. D. Schmidt, Numerical Simulation of Optical Wave Propagation with Examples in MATLAB, SPIE Press (2010).
41. R. C. Hardie et al., “Fusion of interpolated frames superresolution in the presence of atmospheric optical turbulence,” Opt. Eng. 58(8), 083103 (2019). https://doi.org/10.1117/1.OE.58.8.083103
42. R. K. Tyson, Adaptive Optics Engineering Handbook, CRC Press (1999).
43. L.-K. Yu et al., “Isoplanatic angle in finite distance: experimental validation,” Opt. Eng. 54(2), 024105 (2015). https://doi.org/10.1117/1.OE.54.2.024105
44. J. W. Goodman, Introduction to Fourier Optics, 3rd ed., Roberts and Company Publishers (2004).
45. N. S. Kopeika, “Effects of aerosols on imaging through the atmosphere: a review of spatial frequency and wavelength dependent effects,” Opt. Eng. 24(4), 707–712 (1985). https://doi.org/10.1117/12.7973557
46. D. A. LeMaster and M. T. Eismann, “Impact of atmospheric aerosols on long range image quality,” Proc. SPIE 8355, 83550F (2012). https://doi.org/10.1117/12.918286
47. M. T. Eismann and D. A. LeMaster, “Aerosol modulation transfer function model for passive long-range imaging over a nonuniform atmospheric path,” Opt. Eng. 52(4), 046201 (2013). https://doi.org/10.1117/1.OE.52.4.046201
48. S. R. Bose-Pillai et al., “Estimation of atmospheric turbulence using differential motion of extended features in time-lapse imagery,” Opt. Eng. 57(10), 104108 (2018). https://doi.org/10.1117/1.OE.57.10.104108
49. E. Agustsson and R. Timofte, “NTIRE 2017 challenge on single image super-resolution: dataset and study,” in IEEE Conf. Comput. Vision and Pattern Recognit. Workshops, 1122–1131 (2017). https://doi.org/10.1109/CVPRW.2017.150
50. Z. Wang et al., “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004). https://doi.org/10.1109/TIP.2003.819861

Biography

Matthew A. Hoffmire received his master’s degree in electrical engineering from the University of Dayton in December 2020. He worked as a graduate research assistant with the Air Force Research Laboratory, Wright-Patterson AFB, OH, after receiving his BS degree in computer engineering at the University of Dayton in 2018. His research interests include digital signal and image processing and machine learning.

Russell C. Hardie is a full professor in the Department of Electrical and Computer Engineering at the University of Dayton and holds a joint appointment in the Department of Electro-Optics and Photonics. He received the University of Dayton’s top university-wide teaching award in 2006 (the Alumni Award in Teaching). He also received the Rudolf Kingslake Medal and Prize from SPIE in 1998 for work on super-resolution imaging. His research interests include a wide range of topics in signal and image processing, image restoration, medical image analysis, and machine learning.

Michael A. Rucci received his BS and MS degrees in electrical engineering from the University of Dayton in 2012 and 2014, respectively. He is a research engineer at the Air Force Research Laboratory, Wright-Patterson AFB, OH, and his current research interests include day/night passive imaging, turbulence modeling and simulation, and image processing. He and three other researchers received the Rudolf Kingslake Medal from SPIE in 2018 for their work in turbulence modeling.

Richard Van Hook is a research engineer at the Air Force Research Laboratory, Wright-Patterson Air Force Base, Ohio. He received his MS and BS degrees in computer engineering from Wright State University in 2014 and 2008, respectively. His research interests include image processing, atmospheric modeling, and TM.

Barry K. Karch is a principal research electronics engineer in the Multispectral Sensing & Detection Division, Sensors Directorate of the Air Force Research Laboratory, Wright-Patterson AFB, OH. He received his BS degree in electrical engineering (1987), MS degree in electro-optics and electrical engineering (1992/1994), and PhD in electrical engineering (2015) from the University of Dayton, Dayton, Ohio. He has worked in the areas of EO/IR remote sensor system and processing development for 30 years.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Matthew A. Hoffmire, Russell C. Hardie, Michael A. Rucci, Richard Van Hook, and Barry K. Karch "Deep learning for anisoplanatic optical turbulence mitigation in long-range imaging," Optical Engineering 60(3), 033103 (23 March 2021). https://doi.org/10.1117/1.OE.60.3.033103
Received: 23 January 2021; Accepted: 3 March 2021; Published: 23 March 2021