Measurements of optical turbulence over 149-km path
Abstract

An experiment was conducted to study turbulence along a 149-km path between the Mauna Loa and Haleakala mountain tops using digital cameras and light-emitting diode (LED) beacons. Much of the path is over the ocean, and a large portion of the path is 3 km above sea level. On the Mauna Loa side, six LED beacons were placed in a roughly linear array with pair spacings from 7 to 62 m. From the Haleakala side, a pair of cameras separated by 83.8 cm observed these beacons. Turbulence along the path induces tilts on the wavefronts, which results in displacements of the LED spots in the images. The image motion is also contaminated by unwanted noise sources such as camera platform motion. Differential motion between spots cancels much of this noise, and this differential motion is weighted by the turbulence along the path in different ways depending on the geometry between the sources and the cameras. A camera-motion-insensitive weighting function is developed to deal with this observational issue. A linear combination of these weighting functions is then used to generate a composite weighting function, which better rejects turbulence near the sources and receivers and is most sensitive to turbulence in the portion of the path out over the ocean. This technique is used to estimate turbulence in this region. The long range involved caused very strong scintillation in the images, which added new challenges to the data processing. A resulting estimate for $C_n^2$ of $4\times10^{-17}\ \mathrm{m}^{-2/3}$ is in good agreement with the Hufnagel–Valley HV5/7 model and the results of numerical weather modeling.

1.

Introduction

Turbulence has been experimentally characterized along a 149-km path between the Mauna Loa and Haleakala mountain tops in Hawaii through an imaging experiment. Six light-emitting diode (LED) beacons were placed on the Mauna Loa side, which were observed by a pair of cameras on the Haleakala side. The relative motion between the imaged LED spots is used to estimate turbulence.

There exist many different techniques for turbulence measurement and estimation. Turbulence is being studied here because of its effects on optical system performance, so optical means of estimating turbulence are especially attractive because they sense turbulence in the same way turbulence (usually) degrades imaging performance. Many optical approaches produce an integrated or path-averaged value for turbulence, albeit with different weighting functions along the path depending on the method. Scintillometers1 and Hartmann turbulence sensors2 are two such methods, producing estimates weighted toward the center of the path and the detector end of the path, respectively. Some optical approaches can profile turbulence along the path, producing independent estimates for turbulence at different locations along the path. The difference in differential tilt-variance technique3 and SCIDAR4,5 are examples of this. The LED array technique of Gladysz6 also uses wavefront tilts, as does the method discussed in this paper; however, that work does not address the distribution of turbulence along the sensed path. Slope detection and ranging (SLODAR) estimates turbulence along the path using the Fourier transform of the slope covariances across the aperture spacings.7 That approach cannot be used here since only a pair of cameras is available and thus only a single aperture spacing is present. Nonoptical methods to measure turbulence include sonic anemometry,8 which is a point measurement technique, and weather radar, which can profile, but rather coarsely.9 Satellite measurements have also been used to estimate turbulence; while those measurements are optical in nature, it is temperature, rather than a disturbance to an image, that is initially measured, and the turbulence estimate is derived from the large-scale measured temperature gradient.10 Coarse resolution (5 to 45 km horizontally and 100 m to 1 km vertically) volume distributions of optical turbulence can also be derived from numerical weather prediction (NWP) temperature fields in the same manner as those estimated from satellite-derived temperature fields, but this NWP and satellite method of turbulence calculation has yet to be validated against optically or image-derived turbulence estimates.

This paper explains the use of an imaging technique to quantify turbulence. This technique, as employed here, is not used to produce a turbulence profile, and it would not be well suited to doing so because the weighting functions involved are not sufficiently different from each other deep in the path. However, it does produce turbulence estimates against a number of different weighting functions, and these are used to reject the influence of turbulence near the surface and thus produce estimates that are most strongly influenced by the turbulence in the middle of the path. Given that the Hawaii path studied here is mostly about 3 km above the ocean surface, this paper is able to make an initial “validating” comparison of optically inferred turbulence at altitude against turbulence obtained from NWP temperature gradients at the day and time of the optical measurement.

The rest of this paper consists of three sections. Section 2 discusses the path weighting functions used and how they were modified to be insensitive to camera motion. The synthesis of a new weighting function from a linear combination of others to better reject ground-level turbulence is also discussed. The section concludes with a simplified derivation of the crossing path weighting functions based on scaling the results of Fried11 and Winick and Marquis.12 Section 3 provides a description of the experiment and the techniques used in image processing. How the turbulence estimates are obtained from the weighting functions and the measured tilt-variances is also explained. The section continues with comparisons of these results to the Hufnagel–Valley 5/7 model and turbulence estimates from NWP. The section concludes with the lessons learned and ideas for future improvements. Section 4 summarizes the results and the strengths and weaknesses of the techniques used.

2.

Methodology

2.1.

Path Weighting Functions

Path weighting functions model how turbulence along the path contributes to the expected value of the observed tilt-variance. Following the approach of Fried11 and Winick and Marquis,12 the expected value of the differential tilt-variance between a pair of apertures looking at a pair of sources, which lie in the same plane as the apertures, is expressed as follows:13

Eq. (1)

$$\left\langle\left[\vec{\alpha}_1(\theta_2)-\vec{\alpha}_2(\theta_1)\right]^2\right\rangle = 2.91\left(\frac{16}{\pi}\right)^2 D^{-1/3}\int_0^L dz\, C_n^2(z)\int_0^{2\pi} d\varphi\int_0^1 du\,\left[u\cos^{-1}u - u^2\left(3-2u^2\right)\sqrt{1-u^2}\right] \\ \times\left[\left|u\left(1-\frac{z}{L}\right)\right|^{5/3} - \frac{1}{2}\left\{u^2\left(1-\frac{z}{L}\right)^2 + \left(\frac{|s-\Delta\theta\, z|}{D}\right)^2 + 2u\left(1-\frac{z}{L}\right)\left(\frac{|s-\Delta\theta\, z|}{D}\right)\cos\varphi\right\}^{5/6}\right. \\ \left. - \frac{1}{2}\left\{u^2\left(1-\frac{z}{L}\right)^2 + \left(\frac{|s-\Delta\theta\, z|}{D}\right)^2 - 2u\left(1-\frac{z}{L}\right)\left(\frac{|s-\Delta\theta\, z|}{D}\right)\cos\varphi\right\}^{5/6}\right].$$
In this expression, $\vec{\alpha}_1(\theta_2)$ is the two-dimensional (2-D) wavefront tilt measured for source 2 in aperture 1, and vice versa for $\vec{\alpha}_2(\theta_1)$; $D$ is the diameter of the camera apertures, presumed to be the same; $C_n^2(z)$ is the turbulence at a distance $z$ along the path; $L$ is the total length of the path; $s$ is the separation between the two apertures; and $\Delta\theta$ is the angle between the paths from camera 1 to source 2 and from camera 2 to source 1. In this equation, the tilts involved are taken to be zero-mean quantities, so the computed mean-square value is a variance. The path weighting function, as used here, is obtained by skipping the integration over $z$ for now and eliminating the absolute value signs (which are unnecessary for this case because of symmetries and the limits of integration). This path weighting function $f_d(z)$ is expressed as follows:

Eq. (2)

$$f_d(z) = 2.91\left(\frac{16}{\pi}\right)^2 D^{-1/3}\int_0^{2\pi} d\varphi\int_0^1 du\,\left[u\cos^{-1}u - u^2\left(3-2u^2\right)\sqrt{1-u^2}\right] \\ \times\left[\left(u\left(1-\frac{z}{L}\right)\right)^{5/3} - \frac{1}{2}\left\{u^2\left(1-\frac{z}{L}\right)^2 + \left(\frac{s-\Delta\theta\, z}{D}\right)^2 + 2u\left(1-\frac{z}{L}\right)\left(\frac{s-\Delta\theta\, z}{D}\right)\cos\varphi\right\}^{5/6}\right. \\ \left. - \frac{1}{2}\left\{u^2\left(1-\frac{z}{L}\right)^2 + \left(\frac{s-\Delta\theta\, z}{D}\right)^2 - 2u\left(1-\frac{z}{L}\right)\left(\frac{s-\Delta\theta\, z}{D}\right)\cos\varphi\right\}^{5/6}\right].$$

The first term on the second line of this equation, $(u(1-z/L))^{5/3}$, corresponds to the total tilt-variance seen in the apertures, while the remaining terms correspond to the covariance between these tilts. The first half of this statement can be verified by completing the $u$ and $\varphi$ integrals for this part of the weighting function expression, which recovers twice the single-aperture 2-D tilt-variance. The process of evaluating the definite integral (using Mathematica) proceeds as follows:

Eq. (3)

$$\begin{aligned} f_{d\text{-TiltOnly}}(z) &= 2.91\left(\frac{16}{\pi}\right)^2 D^{-1/3}\int_0^{2\pi} d\varphi\int_0^1 du\,\left[u\cos^{-1}u - u^2\left(3-2u^2\right)\sqrt{1-u^2}\right]\left(u\left(1-\frac{z}{L}\right)\right)^{5/3} \\ &= 2.91\left(\frac{16}{\pi}\right)^2 D^{-1/3}\,2\pi\left(1-\frac{z}{L}\right)^{5/3}\int_0^1 du\,\left[u\cos^{-1}u - u^2\left(3-2u^2\right)\sqrt{1-u^2}\right]u^{5/3} \\ &= 2.91\left(\frac{16}{\pi}\right)^2 D^{-1/3}\,2\pi\left(1-\frac{z}{L}\right)^{5/3}\frac{10\sqrt{\pi}\,\Gamma(1/3)}{99\,\Gamma(29/6)} \\ &= 12.14\,D^{-1/3}\left(1-\frac{z}{L}\right)^{5/3}. \end{aligned}$$
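As a quick arithmetic sanity check on the constant in the last two lines, the closed-form value of the $u$ integral can be evaluated numerically. The following is a minimal sketch assuming NumPy and SciPy are available; it is not part of the authors' processing.

```python
# Check that the closed-form u-integral constant reproduces the 12.14 coefficient in Eq. (3).
import numpy as np
from scipy.special import gamma

u_integral = 10.0 * np.sqrt(np.pi) * gamma(1.0 / 3.0) / (99.0 * gamma(29.0 / 6.0))
coefficient = 2.91 * (16.0 / np.pi) ** 2 * 2.0 * np.pi * u_integral
print(f"u-integral constant = {u_integral:.6f}")      # ~0.0256
print(f"tilt-only coefficient = {coefficient:.2f}")   # ~12.14
```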

Multiplying out the left side of Eq. (1) and extracting just these tilt-only weighting function parts results in

Eq. (4)

$$\left\langle\alpha_1^2(\theta_2)\right\rangle + \left\langle\alpha_2^2(\theta_1)\right\rangle = \int_0^L dz\, f_{d\text{-TiltOnly}}(z)\, C_n^2(z) = 12.14\, D^{-1/3}\int_0^L dz\left(1-\frac{z}{L}\right)^{5/3} C_n^2(z).$$

The definition of the spherical wave Fried parameter,14,15 copied below, allows this expression to be written in terms of $r_0$:

Eq. (5)

$$r_{0,\text{sw}} = \left[0.423\,k^2\int_0^{\Delta z} C_n^2(z)\left(\frac{z}{\Delta z}\right)^{5/3} dz\right]^{-3/5}.$$

Recognizing that z increases from the aperture to the source in Eq. (4), but from the source to the aperture in Eq. (5), the integrations in these equations are equivalent and substitution results in twice the known expression14 for the 2-D tilt-variance.

Eq. (6)

$$\left\langle\alpha_1^2(\theta_2)\right\rangle + \left\langle\alpha_2^2(\theta_1)\right\rangle = \frac{12.14\, D^{-1/3}\, r_0^{-5/3}}{0.423\,k^2} = 0.727\, D^{-1/3}\lambda^2\, r_0^{-5/3}.$$
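To illustrate Eqs. (5) and (6) in code, the sketch below evaluates the spherical-wave Fried parameter for a sampled turbulence profile and checks the 0.727 coefficient. The constant-$C_n^2$ profile and the 850-nm wavelength are illustrative assumptions and are not taken from the analysis that follows.

```python
# Sketch of Eq. (5) for a sampled Cn^2(z) profile, plus the Eq. (6) coefficient check.
import numpy as np

def r0_spherical(z, cn2, wavelength):
    """Spherical-wave Fried parameter; z runs from the source (0) to the aperture (dz)."""
    k = 2.0 * np.pi / wavelength
    dz = z[-1]
    integral = np.trapz(cn2 * (z / dz) ** (5.0 / 3.0), z)
    return (0.423 * k ** 2 * integral) ** (-3.0 / 5.0)

z = np.linspace(0.0, 149e3, 1491)          # 149-km path sampled every 100 m
cn2 = np.full_like(z, 4e-17)               # assumed constant turbulence level (illustrative)
print(f"r0 at 850 nm ~ {r0_spherical(z, cn2, 850e-9) * 100:.1f} cm")

# Eq. (6): 12.14 / (0.423 k^2) with lambda^2 factored out gives the 0.727 coefficient.
print(f"0.727 check: {12.14 / (0.423 * 4.0 * np.pi ** 2):.3f}")
```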

Having identified the self- and cross-variance terms in the weighting function, more complicated cases can easily be handled. Figure 1 shows the composition of a crossing path and noncrossing path differential tilts to create a camera motion invariant second difference.

Fig. 1

Composition of differential tilt-variance measurements to create a camera motion invariant second difference.

OE_59_8_081806_f001.png

Using the above, we can now compute the path weighting function for cases like this. Writing out the difference between a pair of tilt differences and squaring this quantity inside the expectation value brackets, we get the result as follows:

Eq. (7)

$$\left\langle\left[\vec{\alpha}_1(\theta_2)-\vec{\alpha}_2(\theta_1)-\vec{\alpha}_1(\theta_3)+\vec{\alpha}_2(\theta_4)\right]^2\right\rangle = \left\langle\alpha_1^2(\theta_2)\right\rangle + \left\langle\alpha_2^2(\theta_1)\right\rangle + \left\langle\alpha_1^2(\theta_3)\right\rangle + \left\langle\alpha_2^2(\theta_4)\right\rangle \\ - 2\left\langle\vec{\alpha}_1(\theta_2)\cdot\vec{\alpha}_2(\theta_1)\right\rangle - 2\left\langle\vec{\alpha}_1(\theta_2)\cdot\vec{\alpha}_1(\theta_3)\right\rangle + 2\left\langle\vec{\alpha}_1(\theta_2)\cdot\vec{\alpha}_2(\theta_4)\right\rangle \\ + 2\left\langle\vec{\alpha}_2(\theta_1)\cdot\vec{\alpha}_1(\theta_3)\right\rangle - 2\left\langle\vec{\alpha}_2(\theta_1)\cdot\vec{\alpha}_2(\theta_4)\right\rangle - 2\left\langle\vec{\alpha}_1(\theta_3)\cdot\vec{\alpha}_2(\theta_4)\right\rangle.$$
In Eq. (7), we have four self-terms where we previously had two, and they all give the same weighting function, but we now have six cross-terms where there was only one before. Since the actual weighting function calculation depends on the difference between the angles and not the angles themselves, only four distinct cross-terms need to be evaluated. The actual spacing of the beacons was such that every pair had a unique separation, so 15 different weighting functions are generated. The weighting functions computed for these 15 cases are shown in Fig. 2.
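The bookkeeping in the expansion of Eq. (7) can be checked with a few lines of arithmetic on arbitrary 2-D tilt vectors; this is only an illustrative consistency check, not part of the measurement processing.

```python
# Verify the Eq. (7) expansion (four self-terms, six cross-terms) for sample 2-D tilt vectors.
import numpy as np

rng = np.random.default_rng(0)
a1_t2, a2_t1, a1_t3, a2_t4 = rng.standard_normal((4, 2))   # four 2-D tilt samples

lhs = np.sum((a1_t2 - a2_t1 - a1_t3 + a2_t4) ** 2)
rhs = (np.sum(a1_t2**2) + np.sum(a2_t1**2) + np.sum(a1_t3**2) + np.sum(a2_t4**2)
       - 2*np.dot(a1_t2, a2_t1) - 2*np.dot(a1_t2, a1_t3) + 2*np.dot(a1_t2, a2_t4)
       + 2*np.dot(a2_t1, a1_t3) - 2*np.dot(a2_t1, a2_t4) - 2*np.dot(a1_t3, a2_t4))
print(np.isclose(lhs, rhs))   # True
```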

Fig. 2

Camera motion invariant weighting functions.

OE_59_8_081806_f002.png

Despite the fact that a double difference is being taken here, these weighting functions remain strong. When compared to the weighting functions (not shown) for the differential tilt-variance in a single aperture for a pair of beacons, the weighting functions in Fig. 2 have about twice the strength and better spatial diversity. If the weighting functions are computed for the simpler crossing case (i.e., the left side of Fig. 1), a notch appears where the beams cross and the weighting function goes to zero. Where the beams cross, the turbulence induces the same tilt on both paths, and this cancels out in the difference. This encodes information about the distribution of turbulence along the path in the differential tilt-variances. For the motion invariant weighting functions used here, this notch does not go all the way down to zero, but it is still positioned at the crossing point. Armed with this information, it is clear that the lowest curve in Fig. 2 corresponds to the smallest separation between beacons since the crossing point moves toward the beacons as they get closer together. Another nice feature of these weighting functions for our purposes here is that they go to zero on either end of the path. That is, data derived through the camera motion invariant tilt calculation will reject turbulence near the ends of the path. Since the beam path is only near the ground at the start and end of the path, this feature rejects near-ground-level turbulence from the measurements. Another way to look at the composite measurement in Fig. 1 is to note that the beam paths captured in a single camera enter with opposite signs, so camera motion cancels out in each camera. It is presumed here that this motion is of the camera’s pointing direction. If the camera were to twist about its pointing direction, this effect would not cancel out. However, a few microradians of pointing error would cause substantial apparent image motion, whereas a few microradians of twist would cause very little.
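A minimal numerical illustration of this cancellation (not the authors' code) is given below: the crossing-minus-noncrossing combination of Fig. 1 is unchanged when a common pointing offset is added to every spot seen by a given camera. The beacon indices and offsets are arbitrary.

```python
# Demonstrate that a per-camera pointing offset cancels in the double difference of Fig. 1.
import numpy as np

rng = np.random.default_rng(1)
cam1 = rng.normal(size=(6, 2))                  # spot positions (6 beacons, x/y), camera 1
cam2 = rng.normal(size=(6, 2))                  # spot positions, camera 2
a, b = 1, 4                                     # an arbitrary beacon pair

def double_difference(c1, c2, a, b):
    # crossing difference minus noncrossing difference
    return (c1[b] - c2[a]) - (c1[a] - c2[b])

shifted = double_difference(cam1 + np.array([0.3, -0.2]),   # camera-1 pointing offset
                            cam2 + np.array([-0.1, 0.5]),   # camera-2 pointing offset
                            a, b)
print(np.allclose(double_difference(cam1, cam2, a, b), shifted))   # True
```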

2.2.

Synthesizing a New Weighting Function

The weighting functions in Fig. 2 can be combined to synthesize a new weighting function, which is most sensitive to turbulence deep in the path. There are only 15 different curves and after about the first third of the path all are very similar, so this technique will clearly be incapable of accurately profiling the entire path. If we have a desired weighting function along the path, Fdes, presumed to be a vector sampled along the path, we ask: what weights, W, should we put on each of the 15 weighting function curves to best approximate Fdes? We will reuse a technique we have previously employed to compute this.16 These unknown weights can be related to the desired function by a matrix M, where M is a discretized version of the weighting function curves. Each column of M corresponds to one of the 15 weighting functions, and each row corresponds to the distance at which they were discretized. This relationship is given as Eq. (8).

Eq. (8)

$$F_{\text{des}} = M\,W.$$

We chose to sample the functions at every 100 m, so the matrix M has 1490 rows and 15 columns. Now we use the Moore–Penrose pseudoinverse to compute the weights through the following equation:

Eq. (9)

$$W = M^{+} F_{\text{des}},$$
where $M^{+}$ is the pseudoinverse. In this case, we experimented with rectangle functions in the middle third or half of the path as well as scintillometer-style weighting functions for the desired weighting function, but little difference was seen in the final results. The actual weighting function generated by this procedure can be seen by examining $MW$. The weights computed in this fashion are optimal in a least squares sense for achieving $F_{\text{des}}$. But given the small number of parameters involved and the similarity of the weighting functions, the match between the actual weighting function, $F_{\text{act}}$, and $F_{\text{des}}$ might not be very close. Figure 3 shows the weighting function resulting from this procedure for a desired weighting function, which is unity in the middle third of the path and zero elsewhere.
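A short sketch of this synthesis step using NumPy's pseudoinverse follows. The 100-m sampling and the middle-third target follow the text; the `rcond` threshold is a hypothetical tuning parameter standing in for the singular-value cutoff discussed in Sec. 3.3, and the matrix `M` is assumed to have been filled by evaluating the 15 constituent weighting functions.

```python
# Sketch of Eqs. (8)-(9): least-squares weights on the 15 constituent weighting functions.
import numpy as np

def synthesize(M, f_des, rcond=1e-3):
    """Return the weights W and the actual (achievable) weighting function M @ W."""
    W = np.linalg.pinv(M, rcond=rcond) @ f_des   # rcond discards small singular values
    return W, M @ W

# Example target: unity over the middle third of the 149-km path, zero elsewhere.
z = np.arange(0.0, 149e3, 100.0)                 # 1490 samples, one every 100 m
f_des = ((z > 149e3 / 3) & (z < 2 * 149e3 / 3)).astype(float)
# M (1490 x 15) would hold the camera-motion-invariant weighting functions sampled on z.
```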

Fig. 3

Weighting function synthesized from linear combination of camera motion invariant weighting functions.

OE_59_8_081806_f003.png

It can be seen that this synthetic weighting function does a better job of rejecting turbulence near the camera end of the path than the constituent weighting functions, and near the source side of the path it looks just like the constituent weighting functions. It is also apparent that this synthetic weighting function is a bit noisy near the camera end of the path.

2.3.

Simplified Derivation of the Weighting Function

The tilt-variance expression given as Eq. (1) was derived13 by closely following the approach used by Fried11 and Winick and Marquis12 for the geometry involved here. It can also be derived directly from Fried’s result by substitution. By recognizing that Fried’s result remains correct even behind the aperture, the equation can be reinterpreted for a two-aperture case when the sources and apertures are all in the same plane. Fried11 and Winick and Marquis12 give the anisoplanatic tilt error, or the differential tilt-variance, for the case of a single aperture observing a pair of stars (plane waves) as

Eq. (10)

$$\left\langle\theta_0^2\right\rangle = 2.91\left(\frac{16}{\pi}\right)^2 D^{-1/3}\int_0^{\infty} C_n^2(z)\, f_\alpha(z)\, dz,$$
where the weighting function is

Eq. (11)

$$f_\alpha(z) = \int_0^{2\pi}\!\!\int_0^1 u\left[\cos^{-1}u - \left(3u-2u^3\right)\sqrt{1-u^2}\right] \\ \times\left\{0.5\left[u^2 + 2us\cos\omega + s^2\right]^{5/6} + 0.5\left[u^2 - 2us\cos\omega + s^2\right]^{5/6} - u^{5/3} - s^{5/3}\right\} du\, d\omega,$$
and the parameter s is

Eq. (12)

$$s = z\,\delta\alpha/D.$$

Some notational choices in these equations differ from those made above. Now $s$ is the separation between the optical paths in multiples of the aperture diameter, $\delta\alpha$ is the angle between these paths, and the weighting function $f_\alpha(z)$ has been defined without the prefactors used previously in Eq. (2). First, the spherical wave version of this weighting function is formed by scaling the appearances of $u$ in the curly braces by $(1-z/L)$. This changes the diameter of the integration region at each plane between the light sources and the camera from the aperture diameter to the diameter of a cone whose base is at the aperture and whose apex is at the light source. This step should be seen as an explanation rather than a derivation. The parameter $s$ equals the distance between the optical paths at each plane along $z$ divided by the aperture diameter. For a single-aperture case, this is given as Eq. (12). In the case of a pair of apertures, it is just a simple linear relationship:

Eq. (13)

$$s = \frac{s_0 - \delta\alpha\, z}{D}.$$

The separation between subapertures has been labeled as s0 to avoid confusion. Substituting these two changes into the weighting function yields

Eq. (14)

$$f_{\alpha,\text{sw},\text{cr}}(z) = \int_0^{2\pi}\!\!\int_0^1 u\left[\cos^{-1}u - \left(3u-2u^3\right)\sqrt{1-u^2}\right] \\ \times\left\{0.5\left[u^2\left(1-\frac{z}{L}\right)^2 + 2u\left(1-\frac{z}{L}\right)\frac{s_0-\delta\alpha\, z}{D}\cos\omega + \left(\frac{s_0-\delta\alpha\, z}{D}\right)^2\right]^{5/6}\right. \\ + 0.5\left[u^2\left(1-\frac{z}{L}\right)^2 - 2u\left(1-\frac{z}{L}\right)\frac{s_0-\delta\alpha\, z}{D}\cos\omega + \left(\frac{s_0-\delta\alpha\, z}{D}\right)^2\right]^{5/6} \\ \left. - \left(u\left(1-\frac{z}{L}\right)\right)^{5/3} - \left(\frac{s_0-\delta\alpha\, z}{D}\right)^{5/3}\right\} du\, d\omega,$$
where the weighting function subscript now includes sw for spherical wave and cr for crossing path. It is worth noting that while this is called the crossing path weighting function, it is the general result for apertures and sources restricted to a plane, whether the beam paths cross or not. This weighting function can be seen to be identical to that in Eq. (2) except for the very last term in the curly braces. This last term is not a function of $u$, but because the $u$-dependent factor on the first line integrates to identically zero over $u$, it contributes nothing to the result.
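This closing claim can be checked numerically for the kernel as reconstructed above; analytically, both pieces of the kernel integrate to $\pi/8$ and cancel. The check below is a sketch assuming SciPy.

```python
# Verify that the aperture kernel integrates to zero over u, so a constant term in the
# braces (the s^(5/3) term) contributes nothing to the weighting function.
import numpy as np
from scipy.integrate import quad

kernel = lambda u: u * np.arccos(u) - u**2 * (3.0 - 2.0 * u**2) * np.sqrt(1.0 - u**2)
value, _ = quad(kernel, 0.0, 1.0)
print(f"integral of kernel over u: {value:.2e}")   # ~0 (analytically pi/8 - pi/8)
```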

3.

Results

3.1.

Experimental Layout

A roughly linear array of six LED beacons was placed at the Mauna Loa Observatory on the island of Hawaii. These beacons were imaged by a pair of cameras located 149 km away at the Haleakala Observatory on the island of Maui. The cameras were located outside on a single mount about 1 m above the ground and separated by 83.8 cm center to center. The cameras were simultaneously triggered by computer and captured a 10-frame burst at 40 Hz every 10 s. Both cameras were FLIR Grasshopper 3 GS3-U3-89S6M-C monochrome versions with 3.45-μm pixel pitch. This gives a pixel field of view of 1.7 m at the source plane. The exposure time was held fixed at 5 ms throughout this experiment. The exposure time was set to be as short as possible while maintaining good signal strength in the cameras. The resulting images appear to freeze the atmosphere well, but the atmospheric coherence time has not been estimated or measured. The cameras were fitted with identical f/4 Nikon telephoto lenses with 75-mm apertures and 300-mm focal lengths. On the Mauna Loa side, the LEDs were placed on the ground in small PVC pipe fittings for weather protection and were battery powered. The LEDs were near-infrared with an 850-nm nominal wavelength and were deployed along a line perpendicular to the imaging direction. The LEDs were unevenly spaced along this line to provide a variety of different pair-to-pair spacings and to ensure some pairs remained resolvable even if close pairs blurred together. Figure 4 illustrates the imaging path used from overhead. The path was mostly over the ocean and mostly about 3 km above the surface. Figure 5 illustrates the imaging path from the side. Because of the curvature of the earth, the closest approach to sea level occurred near the middle of the path. Most of the path that was not over water was on the Mauna Loa side, where the technique is relatively insensitive to turbulence. This feature helps to naturally reject turbulence near the surface from the measurements and will be discussed more deeply in subsequent sections.
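The quoted geometry numbers can be reproduced with a few lines of arithmetic (illustrative only, not part of the data processing):

```python
# Quick geometry checks for the camera and path parameters quoted above.
pixel_pitch = 3.45e-6      # m
focal_length = 0.300       # m
path_length = 149e3        # m

ifov = pixel_pitch / focal_length                    # rad per pixel, ~11.5 microradians
print(f"pixel IFOV: {ifov * 1e6:.1f} urad, {ifov * path_length:.2f} m at the source plane")
print(f"max beacon angle: {62.0 / path_length * 1e3:.2f} mrad (cf. Sec. 3.2)")
```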

Fig. 4

Overhead illustration of experimental layout.

OE_59_8_081806_f004.png

Fig. 5

Side view of experimental layout.

OE_59_8_081806_f005.png

3.2.

Image Processing Techniques

The desired signal from this experiment was the wavefront tilts of the beacons appearing in the camera apertures. The wavefront tilt is just the apparent position of the source in object space divided by the distance to the object, or equivalently the position of the LED image on the focal plane divided by the focal length. The small angle approximation is well justified by the geometry involved, where the biggest raw angle will be the maximum object separation, 62 m, divided by the object distance of 149 km, thus about 0.4 mrad. Camera pointing accuracy is likely a comparably large angle, but it will also be ignored here because it has little effect on the results. Figure 6 shows an hour-long average of the images captured in each camera.

Fig. 6

Average over 3600 images in right and left cameras. Data taken between 0900 and 1000 UTC on April 6, 2019.

OE_59_8_081806_f006.png

It is apparent in this figure that there is a problem with the data; the left camera shows lateral motion, which is not present in the right camera. The presumed explanation is that the left camera was not mounted securely on the stand. Since a 1-pixel image shift corresponds to only an 11.5-μrad pointing change, other explanations such as a loose lens or detector are not ruled out. This motion was likely imperceptible during data collection. Changes to the camera pointing direction do not affect the difference between the image positions of the beacons in a single camera, but the difference of the image positions between the cameras will be corrupted. The camera motion invariant weighting functions shown in Fig. 2 were developed to deal with this issue. The spacings between adjacent beacons, from left to right, were 15, 18, 7, 12, and 10 m.
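From these spacings, the 15 pairwise separations follow directly; the short check below is illustrative and confirms the uniqueness property noted in Sec. 2.1.

```python
# Tabulate the 15 pairwise beacon separations implied by the stated adjacent spacings.
import numpy as np
from itertools import combinations

positions = np.cumsum([0, 15, 18, 7, 12, 10])            # beacon positions along the array, m
separations = sorted(abs(a - b) for a, b in combinations(positions, 2))
print(separations)                                        # 15 values spanning 7 m to 62 m
print(len(set(separations)) == len(separations))          # True: every pair is unique
```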

There was also a great deal of scintillation present in the imagery, which made identifying the spots and computing their centroids challenging. When the algorithm could not identify a spot location, it used the previous centroid. The centroiding code was written in MATLAB using the image processing toolbox with some hand tuning of parameters based upon the camera imagery.
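The authors' processing was done in MATLAB; purely as an illustration of the kind of thresholded spot centroiding described here, a rough Python/SciPy equivalent might look like the following, where the threshold level and minimum spot size are hypothetical tuning parameters.

```python
# Illustrative spot-centroiding sketch: threshold, label connected regions, take centroids.
import numpy as np
from scipy import ndimage

def spot_centroids(frame, threshold, min_pixels=3):
    """Return (row, col) centroids of connected bright regions above threshold."""
    mask = frame > threshold
    labels, n = ndimage.label(mask)
    centroids = []
    for idx in range(1, n + 1):
        if np.sum(labels == idx) >= min_pixels:            # reject isolated hot pixels
            centroids.append(ndimage.center_of_mass(frame, labels, idx))
    return np.array(centroids)
```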

3.3.

Turbulence Estimates from Differential Tilt-Variances

The centroids of the spot images were calculated on each individual frame of data over the first 20 min of the 1-h block used for Fig. 6. This produced 1200 centroid locations for each of the six beacons and two cameras. These positions were combined to produce a camera motion invariant tilt as given by Eq. (7) and illustrated on the right side of Fig. 1. The resulting variances in pixel units and as a function of the source pair separation are shown in Fig. 7, along with the variance that would be expected for constant turbulence with a $C_n^2$ value of $4.35\times10^{-17}\ \mathrm{m}^{-2/3}$ along the path.
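A sketch of this computation for one beacon pair is shown below; the centroid arrays (frames × beacons × 2, in pixels), the beacon indices, and the plate scale are assumed inputs rather than quantities taken from the authors' code.

```python
# Camera-motion-invariant differential tilt-variance for one beacon pair (Eq. (7), Fig. 1).
import numpy as np

def invariant_tilt_variance(cam1, cam2, a, b, plate_scale):
    """Variance of the double difference for beacons a and b, converted to tilt (rad^2)."""
    dd = (cam1[:, b] - cam2[:, a]) - (cam1[:, a] - cam2[:, b])   # crossing minus noncrossing
    dd = dd * plate_scale                                        # pixels -> radians
    return np.sum(np.var(dd, axis=0))                            # sum of x- and y-axis variances

# plate_scale ~ 3.45e-6 / 0.300 rad per pixel for the cameras described above.
```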

Fig. 7

Differential tilt-variances computed for all pairs of sources for experimental data (blue) and a modeled constant turbulence level of $4.35\times10^{-17}\ \mathrm{m}^{-2/3}$ (gold). Theoretical turbulence level chosen to overlap with experiment at the point labeled “both.”

OE_59_8_081806_f007.png

The constant turbulence level used in Fig. 7 was chosen to make the 7-m source separation match for both theory and experiment. For constant turbulence, the expected value of the desired variance is just Cn2 times the area under the weighting function curve. Figure 8 shows these results for each of the 15 pairs of sources.

Fig. 8

Estimated turbulence for each source pair separation.

OE_59_8_081806_f008.png

There is a general trend toward smaller estimated turbulence values for smaller source pair separations. This means that, as the separation decreases, the variances are decreasing even more quickly than this trend alone suggests, since the area under the weighting function curve is also decreasing as the pair separation decreases. This is apparent in the modeled variances in Fig. 7, which are proportional to the areas under the weighting function curves. From Fig. 2, it can be seen that smaller source separations give proportionally more weight to turbulence deeper in the path. This is most apparent in the motion of the main peak of the weighting function, but it could also be shown by calculating the first moment of the weighting functions.

Each of the computed variances can be used to produce an estimate for turbulence. Each computed differential tilt-variance is simply divided by the area under the corresponding weighting function curve. Figure 8 shows these results. There is a clear, but noisy, trend toward smaller estimated turbulence values as the source pair separation decreases. Together with the weighting function trends discussed in the last paragraph, this suggests turbulence is lower in the middle of the path, which is expected, and that the technique used here is successfully rejecting turbulence near the beginning of the path. This raises the question: what can we say about the turbulence level out over the ocean and well above the ground? The synthetic weighting function from Sec. 2.2 will be used to address this question. This synthetic weighting function produces a single estimated turbulence value. If the small singular values in the pseudoinverse are not controlled first, a nonphysical negative number results for $C_n^2$; this can be corrected by applying a threshold to the pseudoinverse. By cutting off small singular values until a positive result emerges, the $C_n^2$ value estimated for turbulence is $4\times10^{-17}\ \mathrm{m}^{-2/3}$. This is within the range of the lower values found in Fig. 8. Since this value was obtained by constraining the output of the estimation process to be positive, it is probably safest to say that the turbulence averaged over the portion of the path over water is estimated to be less than this value.
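The two estimation steps described in this section can be summarized in a short sketch. This is one reasonable reading of the procedure, with the sampled weighting functions `M`, the 15 measured variances, and the desired profile assumed available from the earlier steps, and with the `rcond` threshold standing in for the singular-value cutoff.

```python
# Per-pair and composite Cn^2 estimates from the measured differential tilt-variances.
import numpy as np

def cn2_per_pair(variances, M, dz=100.0):
    areas = np.trapz(M, dx=dz, axis=0)         # area under each weighting-function column
    return variances / areas

def cn2_composite(variances, M, f_des, dz=100.0, rcond=1e-3):
    W = np.linalg.pinv(M, rcond=rcond) @ f_des          # synthesis weights (thresholded pinv)
    # Weighted sum of variances equals the integral of (M @ W) * Cn^2, so dividing by the
    # area under the synthetic weighting function gives the weighted-average Cn^2.
    return np.dot(W, variances) / np.trapz(M @ W, dx=dz)
```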

3.4.

Comparing Turbulence Estimates and Models

The result of the previous section can be compared to the amount of turbulence expected at this location from turbulence models and by other means. The Hufnagel–Valley 5/7 model is the standard profile used for turbulence.17 This model’s prediction at a height of 3000 m is quite close to what is estimated here. Turbulence can also be estimated from numerical weather models using the Tatarskii relation.10 This approach mostly uses the modeled temperature gradient to estimate turbulence. For the location and time of the experimental data presented here, LEEDR18 was used with a numerical weather input from the Global Forecast System to produce a turbulence profile. At 3000 m of altitude, this prediction matched closely with the other estimates. These results are shown in Fig. 9.
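For reference, the standard Hufnagel–Valley 5/7 profile can be evaluated at the 3-km comparison height with a few lines; this is a sketch using the conventional HV5/7 parameters (21 m/s upper-level wind, ground-level coefficient of 1.7 × 10⁻¹⁴ m⁻²/³), not values taken from this paper's modeling.

```python
# Standard Hufnagel-Valley 5/7 profile; h is altitude in meters.
import numpy as np

def hv57(h):
    return (0.00594 * (21.0 / 27.0) ** 2 * (1e-5 * h) ** 10 * np.exp(-h / 1000.0)
            + 2.7e-16 * np.exp(-h / 1500.0)
            + 1.7e-14 * np.exp(-h / 100.0))

print(f"HV5/7 Cn2 at 3 km: {hv57(3000.0):.1e} m^(-2/3)")   # a few 1e-17, near the estimate here
```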

Fig. 9

Turbulence estimated from experiment with Hufnagel–Valley 5/7 turbulence profile as well as a turbulence profile generated from NWP using the Tatarskii relation.

OE_59_8_081806_f009.png

3.5.

Lessons Learned and Future Work

The cameras and lenses were mounted on a single platform and were relatively close together. This was done in the hope that any motion of the system would be common to both cameras, but it is clear from the data that this hope was unfounded. If the cameras were further apart, the weighting functions would have been better for discriminating turbulence deep in the path, and since a camera motion invariant tilt combination has been found, there is no reason not to use a longer baseline.

The scatter of the results seen in Figs. 7 and 8 seems likely to be a noise issue. Perhaps it is attributable to incorrectly assigned centroids. The points scattered highest above the general trend involve at least one of the LEDs from the closest pair, where misassignment or missing spots seems most probable. Spot checks of the values produced against the actual images have not revealed any obvious errors. Work on improved centroid measurement from these data is ongoing. More data from this experiment remain to be analyzed, and that analysis will proceed once the variance estimates are better behaved.

The Greenwood frequency has not been estimated for this experiment, but that can be done from the numerical weather data using both wind data and turbulence estimates. This could verify or refute the proposition that the 5-ms camera integration time was sufficiently fast to freeze the atmosphere.
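A sketch of such a calculation, using the standard Greenwood frequency expression $f_G = 2.31\,\lambda^{-6/5}\left[\int C_n^2(z)\,V(z)^{5/3}\,dz\right]^{3/5}$ with NWP-derived turbulence and wind profiles as assumed inputs, is shown below.

```python
# Greenwood frequency from sampled Cn^2(z) and transverse wind speed V(z) profiles.
import numpy as np

def greenwood_frequency(z, cn2, wind, wavelength=850e-9):
    integral = np.trapz(cn2 * wind ** (5.0 / 3.0), z)
    return 2.31 * wavelength ** (-6.0 / 5.0) * integral ** (3.0 / 5.0)

# The atmospheric coherence time, roughly 0.134 / f_G, could then be compared with the
# 5-ms camera exposure to test the frozen-atmosphere assumption.
```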

4.

Conclusions

In conclusion, a method has been developed and experimentally demonstrated to estimate turbulence along an optical path in a region far from the source and detector. The demonstration used relatively low-cost camera equipment and LED beacons along a 149-km path between mountain peaks in Hawaii. A composite differential tilt measurement was found, which is insensitive to camera motion. Estimates of turbulence agree well with other sources.

Acknowledgments

This work was supported by the Office of Naval Research and the Joint Directed Energy Technology Transition Office through a Multidisciplinary Research Initiative. The views expressed in this paper are those of the authors and do not necessarily reflect the official policy or position of the Air Force, the Department of Defense, or the U.S. Government.

References

1. T. Wang, G. R. Ochs, and S. F. Clifford, “A saturation-resistant optical scintillometer to measure Cn2,” J. Opt. Soc. Am. 68, 334–338 (1978). https://doi.org/10.1364/JOSA.68.000334

2. T. J. Brennan and D. C. Mann, “Estimation of optical turbulence characteristics from Shack–Hartmann wavefront sensor measurements,” Proc. SPIE 7816, 781602 (2010). https://doi.org/10.1117/12.862808

3. M. Whiteley, D. Washburn, and L. Wright, “Differential-tilt technique for saturation-resistant profiling of atmospheric turbulence,” Proc. SPIE 4494, 221–232 (2002). https://doi.org/10.1117/12.454795

4. R. Avila, J. Vernin, and E. Masciadri, “Whole atmospheric-turbulence profiling with generalized SCIDAR,” Appl. Opt. 36(30), 7898–7905 (1997). https://doi.org/10.1364/AO.36.007898

5. V. Kluckers et al., “Profiling of atmospheric turbulence strength and velocity using a generalised SCIDAR technique,” Astron. Astrophys. Suppl. Ser. 130(1), 141–155 (1998). https://doi.org/10.1051/aas:1998217

6. S. Gladysz, “Nearly complete characterization of optical turbulence with an LED array,” in Imaging and Appl. Opt. (3D, AIO, COSI, IS, MATH, pcAOP) (2017).

7. M. Goodwin, C. Jenkins, and A. Lambert, “Improved detection of atmospheric turbulence with SLODAR,” Opt. Express 15, 14844–14860 (2007). https://doi.org/10.1364/OE.15.014844

8. J. C. Kaimal, “Sonic anemometer measurement of atmospheric turbulence,” in Proc. Dyn. Flow Conf. Dyn. Meas. Unsteady Flows (1978).

9. L. Burchett and S. Fiorino, “Measurement of Cn2 profiles from weather radar data and other microwave signals and conversion to visible and NIR Cn2 profiles,” in Imaging and Appl. Opt. (2013).

10. S. Fiorino, “Satellite and radar measurement of CT2, Cn2, and Cv2,” in Imaging and Appl. Opt. (2014).

11. D. Fried, “Varieties of isoplanatism,” Proc. SPIE 0075, 20–29 (1976). https://doi.org/10.1117/12.954733

12. K. Winick and D. Marquis, “Stellar scintillation technique for the measurement of tilt anisoplanatism,” J. Opt. Soc. Am. A 5(11), 1929–1936 (1988). https://doi.org/10.1364/JOSAA.5.001929

13. S. R. Bose-Pillai et al., “Profiling atmospheric turbulence using time-lapse imagery from two cameras,” Proc. SPIE 11001, 1100112 (2019). https://doi.org/10.1117/12.2519022

14. R. J. Sasiela, Electromagnetic Wave Propagation in Turbulence: Evaluation and Application of Mellin Transforms, 2nd ed., SPIE Press, Bellingham, Washington (2007).

15. J. D. Schmidt, Numerical Simulation of Optical Wave Propagation, SPIE Press, Bellingham, Washington (2010).

16. J. E. McCrae, S. R. Bose-Pillai, and S. T. Fiorino, “Estimation of turbulence from time-lapse imagery,” Opt. Eng. 56(7), 071504 (2017). https://doi.org/10.1117/1.OE.56.7.071504

17. M. Roggemann and B. Welsh, Imaging through Turbulence, CRC Press, Boca Raton, Florida (1996).

18. S. T. Fiorino et al., “Validation of a UV-to-RF high-spectral-resolution atmospheric boundary layer characterization tool,” J. Appl. Meteor. Climatol. 53(1), 136–156 (2014). https://doi.org/10.1175/JAMC-D-13-036.1

Biography

Jack E. McCrae received his PhD in physics from the Air Force Institute of Technology (AFIT) in 1997, an MS degree in optical physics from AFIT in 1993, and a BS degree in physics from MIT in 1984. He is a retired Air Force Colonel and currently a research assistant professor in the Engineering Physics Department at AFIT. His research interests include optics, lasers, quantum and nonlinear optics, laser radar, atmospheric propagation, and imaging.

Santasri R. Bose-Pillai received her PhD in electrical engineering with a focus in optics in 2008 and her MSEE in 2005, both from New Mexico State University. She received her BSEE with honors from Jadavpur University, India, in 2000. Currently, she is a research assistant professor in the Engineering Physics Department at AFIT Center for Directed Energy. Her research interests include propagation and imaging through atmosphere, telescope pointing and tracking, rough surface scattering, and laser communications. She is a member of OSA, SPIE, and DEPS.

Steven T. Fiorino received his BS degrees in geography and meteorology from Ohio State (1987) and Florida State (1989) universities. He additionally holds an MS degree in atmospheric dynamics from Ohio State (1993) and a PhD in physical meteorology from Florida State (2002). He is a retired Air Force Lieutenant Colonel with 21 years of service and currently a professor of atmospheric physics in the Engineering Physics Department at AFIT and the director of the Center for Directed Energy.

Aaron J. Archibald currently serves as a research engineer for the Department of Engineering Physics, AFIT Center for Directed Energy. He earned his BS degree in engineering physics from Wright State University in 2010 and an MS degree in nanotechnology from Chung Yuan Christian University, Taiwan, in 2012. His research supports the efforts of the Airborne Aero-Optics Laboratory through design, fabrication, and operation of an experimental laser tracking system.

Joel Meoak received his BS degree in computer engineering from Cedarville University in 2018. He served as a programming research assistant in the Center for Directed Energy working on enhancements to the AFIT Directed Energy and Atmospheric Models (ADAM) and Physics-based Imaging and Tracking of Ballistics, UAVs, and LEO satellites (PITBUL) software packages. He is now on active duty as a Second Lieutenant in the US Air Force.

Brannon J. Elmore received his BS degree in computer science from Wright State University in 2014. He is the lead software developer for the Center for Directed Energy at the AFIT, Wright-Patterson Air Force Base, Ohio. His research efforts are focused on the continuous improvement of LEEDR and HELEEOS, the core applications of the AFIT directed energy and atmospheric models (ADAM) software package.

Thomas Kesler earned his BS degree in engineering physics with a computer science minor from Wright State University in 2014. He is a software engineer for the Center for Directed Energy at AFIT working on the LEEDR, HELEEOS, and PITBUL software packages. His work includes developing C++ implementations of LEEDR and HELEEOS, and incorporating optimizations that reduce code runtimes and CPU usage. His research efforts include applying scientific computing to solving problems in atmospherics and optics.

Christopher Rice currently serves as a research assistant professor in the Department of Engineering Physics at the AFIT. He earned his BS degree in electrical engineering from Cedarville University, his MS degree in electrical engineering from AFIT, and his PhD in optical sciences and engineering also from AFIT. He is interested in topic areas related to high-energy lasers, remote sensing, laser lethality, and optical diagnostics.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Jack E. McCrae, Santasri R. Bose-Pillai, Steven T. Fiorino, Aaron J. Archibald, Joel Meoak, Brannon J. Elmore, Thomas Kesler, and Christopher A. Rice "Measurements of optical turbulence over 149-km path," Optical Engineering 59(8), 081806 (22 June 2020). https://doi.org/10.1117/1.OE.59.8.081806
Received: 29 November 2019; Accepted: 30 March 2020; Published: 22 June 2020
KEYWORDS: Turbulence, Cameras, Optical testing, Light emitting diodes, Optical turbulence, Optical engineering, Ocean optics
