Starshade formation flying I: optical sensing
Open Access | 3 February 2020
Abstract

A key challenge for starshades is formation flying. To successfully image exoplanets, the telescope boresight and starshade must be aligned to ∼1  m at separations of tens of thousands of kilometers. This challenge has two parts: first, the relative position of the starshade with respect to the telescope must be sensed; second, sensor measurements must be combined with a control law to keep the two spacecraft aligned in the presence of gravitational and other disturbances. In this work, we present an optical sensing approach using a pupil imaging camera in a 2.4-m telescope that can measure the relative spacecraft bearing to a few centimeters in 1 s, much faster than any relevant dynamical disturbances. A companion paper will describe how this sensor can be combined with a control law to keep the two spacecraft aligned with minimal interruptions to science observations.

1.

Introduction

Starshades, large occulters designed to artificially block starlight, offer a path to imaging and spectroscopy of Earth-like extrasolar planets. The carefully shaped petals of a starshade create a dark stellar eclipse over the entrance pupil of a space telescope by controlling diffraction so that starlight does not concentrate along the optical axis as in the “Arago spot” phenomenon. Current mission concepts envision 20- to 100-m starshades positioned tens of thousands of kilometers in front of their respective space telescopes.1,2

A key challenge in the starshade concept is formation flying, as the starshade shadow is only sufficiently dark in a region 1 to 2 m wider than the pupil of the telescope. This margin is intentional: creating a much wider shadow requires a much larger starshade, which blocks more of the inner orbits surrounding the star and is also harder to build and launch. With a shadow of this size, the relative bearing, or "shear," between the starshade and telescope must be maintained to ±1 m, despite the large distances between the spacecraft.3 (The separation tolerance is far less stringent, at about 250 km.) Formation flying has two components: sensing, to determine the position of the starshade with respect to the telescope, and control, to use the sensor data and onboard thrusters to efficiently maintain the required flight tolerances in the space environment. This paper addresses the challenge of sensing; a companion paper4 addresses the challenge of control.

A brief review of the mission concept is in order. The starshade is intended to work with a space telescope and must be launched and deployed. The 20- to 30-m optic is rolled up into a cylinder to fit in a rocket fairing and then launched into space. In space, it separates and unfurls, with required petal shape tolerances of 100 μm and petal position tolerances of 1 mm.5 Once unfurled, the starshade uses thrusters to maneuver itself between the target star and space telescope, with typical spacecraft separations of tens of thousands of kilometers. When aligned, science observations occur, after which the starshade moves to the next target star. These retargeting maneuvers can take days to weeks, as tens of degrees between target stars translate into slews of hundreds of thousands of kilometers.

The pointing and acquisition problem can be divided into three different regimes, which we refer to as coarse, medium, and fine. In the coarse and medium regime, the relative starshade position is determined through measurements of distance and angle. The distance can be determined by a time-of-flight S-band radio link between the spacecraft, with an accuracy of 500 m, which is a negligible error given the 250-km range tolerance. In the coarse regime, at relative separations of less than 600 km from the target, a wide-angle (3  deg) laser beacon on the starshade can be used in conjunction with an external star camera. Here, the angles between the starshade (which may be identified as a blinking or uncatalogued point source) and a set of reference stars may be measured with a star tracker camera to better than 2 arc sec,6 corresponding to a shear accuracy of 400 m for a typical separation of 40,000 km. A switch to the medium-sensing mode occurs once the angular separation from the target star is less than a few arc seconds. Here, the telescope’s internal science camera is used to sense the starshade position, using the same concept of differential measurement of point-spread functions (PSFs), but with a much finer accuracy of 20 mas, or about 4 m. This is maintained until the starshade begins to occult the star at a separation of about 15 m. At this point, the two PSFs are no longer well separated in the science camera, and a switch occurs to the fine-sensing mode. (In flight, we anticipate the possibility of a brief transition period without good sensor measurements between the medium and fine sensing mode, from 30 to 10 m out. This will require some knowledge of the starshade position and velocity to plan the rocket firings.7)

The fine-sensing mode, for stationkeeping during science operations, uses an internal pupil imager of the telescope to determine the relative shear between it and the starshade. During stationkeeping, the relative bearing between the spacecraft and starshade must stay within the meter deadband, with trajectory corrections required roughly every 10 min to compensate for the ∼1-μg differential gravitational acceleration and solar radiation pressure. These corrections are accomplished through starshade thruster firings, which produce ballistic trajectories within the deadband. The overall speed of these maneuvers is fairly slow, with maximum velocities of 2 cm/s at the deadband boundary.
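As a rough consistency check on this cadence, a starshade starting from rest and subject to a constant ∼1-μg ≈ 10-μm/s² differential acceleration drifts across the 1-m deadband in

$$t \sim \sqrt{\frac{2d}{a}} = \sqrt{\frac{2\,(1\ \mathrm{m})}{10^{-5}\ \mathrm{m/s^2}}} \approx 450\ \mathrm{s} \approx 7.5\ \mathrm{min},$$

the same order as the quoted ∼10-min interval between trajectory corrections.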

2.

Optical Sensing

The 1-m stationkeeping demand on a starshade mission leads to sensing needs that are more stringent than 1 m. A starshade technology maturation program called S5 (Ref. 8) defined the sensing requirement: the 3-σ error signal must be <30 cm for a 2.4-m telescope aperture. While no requirement on sensing cadence was given, the rate must be high enough to permit control within the 1-m deadband under the disturbances experienced in flight (Fig. 1).

Fig. 1

Outline of the formation flying problem. The 20 to 80 Mm separation between the telescope and starshade must be maintained within a cylinder 1 m in radius and 250 km in length. The radial control consists of firing the starshade's thrusters to execute ballistic trajectories within the 1-m deadband to counteract differential gravitational accelerations of 10 to 20 μm/s².


Starshades are designed to operate in the Fresnel regime, where the dimensionless "Fresnel number" F = r²/(λZ) is in an intermediate regime of F ∼ 10. Here, r is the starshade radius, λ is the wavelength, and Z is the separation between the starshade and telescope; the optical propagation physics is preserved when the Fresnel number is the same. Spectroscopy of exoplanets requires rather broad wavelength coverage to measure different spectral features. Interesting molecular signatures exist from 400 to 800 nm, including those of oxygen, ozone, and water, and, in the case of the Earth, the "red edge" at 700 nm pointing to the presence of vegetation. However, moving from 400 to 800 nm changes the Fresnel number by a factor of 2, which is challenging to accommodate in the optical design. This leads to the definition of "science bands" corresponding to different spectral regions, where the starshade must change its distance from the telescope so that λZ ≈ constant. For example, in the case of the Wide Field Infrared Survey Telescope (WFIRST) starshade concept, the blue science band is from 400 to 600 nm, at a separation of about 40,000 km; the green from 600 to 800 nm, at separations of 30,000 km; and the red from 800 to 1000 nm, at separations of 20,000 km (Fig. 2).
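The quoted separations follow directly from holding λZ constant. A minimal sketch, anchoring the invariant at the blue band (the 500-nm band center is an assumption for illustration):

```python
# Separations that keep lambda*Z (equivalently, the Fresnel number) constant
# across science bands, anchored at the blue band: 500 nm at 40,000 km.
lambda_blue, z_blue = 500e-9, 40_000e3      # band-center wavelength (m), separation (m)
lz = lambda_blue * z_blue                   # invariant lambda*Z product

for name, lam in [("green", 700e-9), ("red", 900e-9)]:
    print(f"{name}: Z ~ {lz / lam / 1e3:,.0f} km")
# green: Z ~ 28,571 km  (quoted as ~30,000 km)
# red:   Z ~ 22,222 km  (quoted as ~20,000 km)
```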

Fig. 2

Plot of the starshade suppression for the red, green, and blue science bands.


Starshade suppression rapidly degrades when operated outside of the designed science band (Fig. 2). For example, in the green science band, the starlight suppression at 600 nm is nearly 10⁷ times higher than at 500 nm, despite these wavelengths being only 100 nm apart. This bright "leaked" light must be blocked internally in the telescope using bandstop filters. Note that the starshade only suppresses the on-axis starlight; off-axis planet light is unaffected.

It is this out-of-band stellar "leakage" that is actually key to sensing the shear. The starshade cannot effectively suppress light at these wavelengths, and it focuses behind the starshade as a bright core of light, similar to the classical spot of Arago. At these intermediate distances and Fresnel numbers, the spot width is on the order of tens of centimeters, surrounded by a dark ring and complex diffraction artifacts due to the petals (Fig. 3). While the light distribution does not meaningfully change with starshade distance changes of hundreds of kilometers (the Fresnel number stays almost the same), it precisely tracks the shear offset of the starshade: if the starshade moves vertically by 25 cm, the pattern will move vertically by that same amount. Thus, the offset of the spot and surrounding light with respect to the center of the telescope pupil can be used to determine the shear between the starshade and the telescope.

Fig. 3

(a) 20×20  m image of out-of-band light pattern at the telescope pupil, stretched to show detail. A transparent image of the telescope pupil at the same scale is overlaid. (b) A simulated image of the telescope’s internal pupil camera, showing the Arago spot slightly offset from center. (c) The noisy pupil image is compared against a library of precomputed images with known shear offsets, and the best match corresponds to the relative shear of the starshade and telescope, in this case, at shear offset (0.0, 0.6).


In order to effectively sense this signal, some way of measuring the light distribution in the pupil of the telescope is needed. (Analyzing this light in the focal plane is much less effective, as shear changes lead to only subtle differences in the PSF.) Pupil sensors, which directly image this light distribution, are not common in space telescopes, as they either require a movable optic to shift the focus to the position of the pupil or a separate camera. However, they are critical elements in high contrast imaging systems, where they can be combined with interferometric elements or lenslets to directly sense internal wavefront aberrations, which can then be corrected by deformable mirrors. Typically, the pupil is imaged onto a small-format detector for high readout speeds. A familiar example of a pupil sensor is the Shack–Hartmann wavefront sensor.

Previous work has examined the viability of optical sensing to determine the shear of the starshade with respect to the telescope. Noecker9 reviewed some potential methods of sensing the relative shear and introduced the concept of pupil sensing using a set of outrigger telescopes on "booms" to measure the relative gradient of light. A similar implementation was proposed by Sirbu and Kasdin,10 which used an internal infrared octant sensor to provide the sensing information. Harness and Cash11 developed an analytic method of centroiding using pupil plane images that is similar to the method proposed in this work. Image plane sensing is also possible; Scharf et al.7 presented a method using the science camera with difference images of the starshade (and its laser beacon) and target star. Image plane sensing is more challenging to implement, given the small angles involved, and has an expected measurement precision of 1 m (3-σ) (ibid); it is not able to reach the performance of pupil-plane schemes.

The work presented in this paper builds on these previous developments, with the goal of characterizing the precision of pupil plane sensing with realistic inputs for the radiometric error budget, assuming a WFIRST-sized telescope and the pupil camera approach of Harness and Cash. We present analytic calculations and detailed numerical simulations of the expected sensing performance and validate them against laboratory experiments performed at similar Fresnel numbers and signal-to-noise ratios (SNRs). Unlike the analytic spot centroiding algorithm of Harness and Cash, we develop a sensing algorithm based on image matching, where the data from the pupil camera are compared against a library of precomputed pupil images corresponding to different starshade offsets; see Fig. 3. This is a brute-force algorithm but is tractable given the minimal degrees of freedom in the problem (just two, the offsets in the horizontal and vertical directions). It is essentially equivalent to matched filtering and, as such, should be optimal, with uncertainties in position driven by photon and detector noise rather than imperfections in the matching algorithm. While the algorithm appears to be easily manageable on a flight computer, simpler algorithms (such as gradient-based centroiding) could be developed that have a much lower memory footprint and faster speed.

Portions of the text below (particularly describing experimental setup and design) are repeated verbatim from a previous conference proceeding12 and the technical report from NASA’s S5 program describing the formation flying milestone.13

2.1.

Radiometry

The number of photons detected by the pupil sensor depends on the star brightness, the starshade suppression, the internal optical efficiency of the telescope optics up to the sensor, and the detector quantum efficiency. All of these terms depend on wavelength.

2.1.1.

Stellar input flux

For the stellar models, we used a solar-type model from the ATLAS9 library14 (Teff = 5750 K, log[g] = 4.5, [Fe/H] = 0). We validated the stellar spectrum code against a standard measured solar irradiance spectrum (ASTM E-490), finding agreement in spectral flux density at the few percent level from 300 to 1000 nm. Validation against filter photometric zero points also agreed to within a few percent. The stellar spectral type is a minor contributor to the photon budget, as it causes variations in flux of at most a factor of 3 for stars of the same MV at any of the relevant wavelengths. The main contributor to the stellar photon budget is the apparent magnitude of the target stars, which ranges from MV = −1.5 to 5.3, a factor of about 500 in flux.
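The factor of 500 follows from the standard magnitude-to-flux relation applied over this range:

$$\frac{F_{\max}}{F_{\min}} = 10^{0.4\,\Delta m} = 10^{0.4\,[5.3 - (-1.5)]} = 10^{2.72} \approx 500.$$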

2.1.2.

Starshade optical transmission

The starshade optical transmission plays the largest role in the overall photon budget, as it varies by 6 to 7 orders of magnitude between the science bands (deep suppression at 10⁻¹⁰) and the sensing bands (10⁻³ to 10⁻⁴), with a steep transition in wavelength scaling as ∝ λ^x, where x ≈ 15, as shown in Fig. 2. Starshade optical models have been validated at better than the 10⁻¹⁰ level in laboratory demonstrations (Harness et al., in preparation). In this work, we are primarily interested in the performance at 10⁻³ to 10⁻⁴, where optical modeling uncertainties are negligible.

2.1.3.

Telescope efficiency

In the case of starshade operations with the WFIRST coronagraph, the optical train does not use any of the complex masks or stops that make the coronagraph so effective at suppressing starlight. These are simply moved out of the way, leading to much higher throughput on the science and spectrograph cameras. The deformable mirrors and other adaptive optics are also not actively controlled but are set to predetermined "flat" setpoints. (A "rough" correction of 10-nm wavefront error, which would be unacceptable for coronagraphic performance, still corresponds to a Strehl ratio better than 98% and so has a negligible effect on starshade planet sensitivity.) There are about 20 optical surfaces between the telescope pupil and the pupil sensor (Fig. 4), which is not uncommon for a coronagraph but leads to throughput losses compared to a purpose-built pupil imaging camera. Additionally, while it has not yet been decided what kind of filter (if any) will be present in front of the pupil camera (the low-order wavefront sensor, or LOWFS) for starshade operations, we chose to limit the spectra to broad bandpasses that correspond to the peak out-of-band light for the different science modes (see Fig. 2). These are 400 to 540 nm for the red band, 400 to 435 nm for the green band, and 870 to 900 nm for the blue band. Operating much outside these bandwidths would not have a large effect, as the starshade suppression increases dramatically with wavelength.

Fig. 4

The formation flying path of the WFIRST coronagraph uses all the optics up to the LOWFS. Red x's mark optics moved out of the beam for starshade operations. Figure adapted from Tang et al.15


To compute the total efficiency of the LOWFS camera, we combined transmission curves of the individual optical elements from the WFIRST coronagraph optical design (Hong Tang, private communication) and the camera quantum efficiency from the manufacturer datasheet (CCD201 from Teledyne e2V).16 Figure 5 shows the resulting curves, with the total efficiency not exceeding 50% anywhere in the relevant bandpass. The wavelength cutoffs are set by aluminum and silver reflectance at the blue end and detector quantum efficiency at the red end. While the optical design is not finalized, it is not the driving factor for sensing performance, as we will show that changes in throughput of a factor of 2 will not preclude accurate sensing.

Fig. 5

Plots of the CCD quantum efficiency, optical efficiency, and combined efficiency of the WFIRST coronagraph (the input to our radiometric model).


2.2.

Analytic Calculations

It is possible to get a rough estimate of the sensing performance using simple scaling arguments, in particular, the “centroid accuracy” formula:

Eq. (1)

$$\sigma_x = \frac{\mathrm{FWHM}}{c \cdot \mathrm{SNR}},$$
where σx is the spot centroid accuracy (1-σ), FWHM is the spot full-width at half-maximum, SNR is the signal-to-noise ratio of the spot, and c is a constant of order unity that depends on the exact morphology of the PSF. This formula is used in astrometry,17,18 with a value of c = 2 being appropriate for Gaussian or Moffat-like stellar profiles.

We calculate the Arago spot FWHM in the numerator using an analytic approach that assumes the starshade is circular (the petals suppress contrast but have a minor effect on spot size). In this case, the intensity distribution near the optical axis has the functional form

Eq. (2)

$$I(x,y) \propto J_0^2\!\left(\frac{2\pi r \sqrt{x^2 + y^2}}{\lambda z}\right),$$
where x, y are the coordinates at the pupil of the telescope; J₀ is the zeroth-order Bessel function of the first kind; and r, λ, and z are the radius of the starshade, the wavelength of light, and the distance from the starshade to the telescope, respectively. The FWHM of this distribution is

Eq. (3)

$$\mathrm{FWHM} \approx \frac{\lambda z}{\pi r}.$$
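As a worked example, assuming the 26-m (r = 13 m) starshade and the red-band guiding values of Table 1 (weighted wavelength 496 nm at a 20,000-km separation):

$$\mathrm{FWHM} \approx \frac{(496\times10^{-9}\ \mathrm{m})(2\times10^{7}\ \mathrm{m})}{\pi\,(13\ \mathrm{m})} \approx 0.24\ \mathrm{m},$$

consistent with the "tens of centimeters" spot size quoted in Sec. 2.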

What remains to evaluate in Eq. (1) is the SNR. For the SNR, we use the “CCD formula”:

Eq. (4)

$$\mathrm{SNR} = \frac{N_{\mathrm{ph}}}{\sqrt{N_{\mathrm{ph}} + n_{\mathrm{ap}}\,\mathrm{RN}^2}},$$
where Nph is the number of photons in the spot, nap is the number of pixels covered by the spot, and RN is the readout noise per pixel, conservatively assumed to be five electrons. This assumes that readout noise and photon noise dominate the error budget, neglecting terms relating to dark current and sky background, which are not large with the baselined electron-multiplying CCD (EMCCD) detector (the e2v CCD201) in a space environment. For the number of photons in the spot Nph, we assume all the photons at the pupil (multiplied by the system efficiency) end up in a spot with an FWHM given by Eq. (3), rather than computing the numerical diffraction pattern and considering the pupil obscurations and numerical propagation through the telescope optics.
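The pieces of Eqs. (1), (3), and (4) combine into a one-line accuracy estimate. A minimal sketch follows; the photon count of 500 and the 32×32 pupil sampling are illustrative assumptions, not values from the radiometric budget:

```python
import numpy as np

def centroid_accuracy(n_ph, fwhm_m, pix_scale_m, read_noise_e=5.0, c=2.0):
    """1-sigma centroid accuracy from Eqs. (1) and (4).

    n_ph        -- detected photons in the Arago spot
    fwhm_m      -- spot FWHM in meters, from Eq. (3)
    pix_scale_m -- meters per pixel at the pupil camera
    """
    n_ap = np.pi * (0.5 * fwhm_m / pix_scale_m) ** 2       # pixels inside the FWHM disk
    snr = n_ph / np.sqrt(n_ph + n_ap * read_noise_e ** 2)  # Eq. (4), "CCD formula"
    return fwhm_m / (c * snr)                              # Eq. (1)

# Red-band example: FWHM ~ 0.24 m (worked above), 32x32 pixels across a
# 2.4-m pupil, and an assumed ~500 detected photons in the spot.
sigma = centroid_accuracy(n_ph=500, fwhm_m=0.24, pix_scale_m=2.4 / 32)
print(f"1-sigma = {100 * sigma:.1f} cm, 3-sigma = {300 * sigma:.1f} cm")  # ~0.6 / ~1.9 cm
```

With these assumed inputs, the result lands near the red-band analytic entry in Table 1.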

Figure 6 shows the results of the analytic predictions. For typical target star brightnesses of MV<6, the predicted accuracy in 1 s of exposure time is better than 3 cm in all science bands, comfortably within the 30-cm requirement. The requirement only becomes challenging to meet at around 10th magnitude in the blue and green science bands. This implies that, with a suitably designed sensing algorithm, sensing accuracy should not be a challenge for starshade operations.

Fig. 6

Analytic calculations of centroid accuracy for the red, green, and blue science bands. For typical starshade target stars of MV<6, the accuracy is well within the 30-cm, 3-σ requirement in 1 s of exposure time.


This is a simple approximation of the achievable precision, considering only the starshade size, the telescope efficiency, and the detector sampling and noise. However, it gives a useful estimate of the expected performance and of its scaling. As we show later, it agrees well with the numerical results, supporting the validity of the more detailed simulations.

3.

Numerical Simulations

The analytic estimates of performance did not refer to a particular centroiding algorithm but considered theoretical limits based on spot size and SNR. The true images will not be simple spots but will have extra structure, particularly when considering pupil obscurations. A spot centroid algorithm works best with an unobscured spot and would potentially fail if the spot exited the pupil area or was blocked by the secondary mirror. These shortcomings can be fixed in principle, but this leads to a second issue, which is that any centroid algorithm will add some error to the spot position beyond the fundamental limits imposed by spot shape and photon noise. We adopted a different approach that would avoid these obscuration and visibility issues, with minimal error added by the algorithm, as will be described below.

We constructed detailed numerical simulations to create accurate models of the images seen on the pupil camera and to estimate the sensing performance. These simulations used electric field propagation from the starshade through the telescope optics. First, the electric field of the starshade was calculated using the boundary integral method of Cady.19 The electric field was then (1) shifted to account for the input offset positions, (2) multiplied by the telescope pupil aperture function to add the central obscuration, outer diameter, and spiders, (3) propagated to the internal Zernike sensor plane, (4) multiplied by the Zernike phase function, (5) propagated to the LOWFS camera, and (6) converted to an intensity. We used Fourier transforms to propagate between focal and pupil planes and performed all propagations at the intensity-weighted wavelength, which is reasonable given the limited sensing bandwidth. This procedure was repeated on a 2-cm grid to build up the image library.
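A minimal sketch of steps (1) through (6) is below. The starshade field would come from a boundary-integral code as in Ref. 19; the function and variable names are illustrative, and simple FFTs stand in for the actual plane-to-plane propagators:

```python
import numpy as np

def pupil_image(e_starshade, shift_px, pupil_mask, zernike_phase):
    """Propagate a precomputed starshade field to the LOWFS camera
    for one shear offset; returns the noiseless intensity image."""
    e = np.roll(e_starshade, shift_px, axis=(0, 1))    # (1) shift for the offset position
    e = e * pupil_mask                                 # (2) apply the telescope aperture
    e_foc = np.fft.fftshift(np.fft.fft2(e))            # (3) propagate to the Zernike sensor plane
    e_foc = e_foc * np.exp(1j * zernike_phase)         # (4) apply the Zernike phase function
    e_pup = np.fft.ifft2(np.fft.ifftshift(e_foc))      # (5) propagate to the LOWFS camera
    return np.abs(e_pup) ** 2                          # (6) convert to intensity

# The image library is built by repeating this over a 2-cm grid of offsets.
```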

The Zernike sensor, an interferometric pupil phase sensor in the coronagraph, is only useful for low-order wavefront sensing; for shear sensing, it serves no useful function. Its effect on the shear signal is to create mild intensity gradients over the image and to diffract a good deal (10% to 20%) of the light outside the pupil imager, similar to a coronagraph. We included it because it reduces the flux, making the simulation more conservative, and because it is expected to be in the system in the baseline configuration. In practice, it would be possible to use the Zernike sensor to sense tip/tilt separately from shear, but this is beyond the scope of this work. Adding telescope tip/tilt jitter to the simulations had less than a 10% effect on the sensing precision but increased computation times significantly, so we did not include it. We also ignored motion blur from the starshade, for two reasons. First, the maximum speed of 2 cm/s will contribute at most 2 cm of sensing error for a 1-s exposure time, which is well below the required sensitivity. Second, actual target stars are bright enough that the exposure times will be much less than 1 s, so minimal motion blur will occur.

Here, we introduce the particular algorithm used to determine the shear offset. (For a full description of the algorithm, including storage requirements and computational complexity, see Sec. 6.) Rather than using a spot centroid algorithm, we use a least-squares image matching procedure, equivalent to a matched filter, which should contribute minimally to the derived uncertainty in the position. Each input image I, converted into a vector of n pixels (for example, n = 1024 for a 32×32 image), is normalized by the sum of the image intensities, $I_m = I/\sum_n I$, and the scalar $e_{x,y}^2 = \sum (L_{x,y} - I_m)^2$ is calculated for each image $L_{x,y}$ in the library. (All library images are also mean subtracted.) The library image with the lowest $e_{x,y}^2$ is selected, and its position (x, y) becomes the starshade position estimate.
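A minimal sketch of this matching step follows. The `library` dictionary mapping (x, y) offsets in meters to model images is an illustrative data layout, not the flight implementation, and input and library images are prepared identically here, one consistent reading of the normalization described above:

```python
import numpy as np

def prep(img):
    """Flatten, normalize by the summed intensity, and remove the mean,
    making the match insensitive to the overall flux level."""
    v = np.asarray(img, dtype=float).ravel()
    v = v / v.sum()
    return v - v.mean()

def match_shear(image, library):
    """Return the library offset (x, y) with the smallest squared residual."""
    v = prep(image)
    best_xy, best_e2 = None, np.inf
    for xy, model in library.items():          # brute-force search over the library
        e2 = np.sum((prep(model) - v) ** 2)    # e^2_{x,y}; in flight the library
        if e2 < best_e2:                       # images would be stored prenormalized
            best_xy, best_e2 = xy, e2
    return best_xy
```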

For the numerical simulations, we used library grid spacings of 2 cm calculated over the positive quadrant of a circle of <1.3  m in radius. We tested shear positions separated by 30 cm in one quadrant of the control region. (The shadow is nearly perfectly circularly symmetric in the control region and more complex diffraction effects do not appear until 4 m from the center; see Fig. 8.) At each grid point, we used the same method to propagate the electric field through the starshade and telescope optics but reduced the amplitude of the electric field to account for stellar magnitude and optical efficiency. We generated 300 such realizations per point, added Poisson and readout noise, and then matched them to the image library, saving the best-fit positions. From these matches, we generated empirical 1-, 2-, and 3-σ error ellipses, as shown in Fig. 7.

Fig. 7

Numerical simulations of (a) red, (b) green, and (c) blue science bands for 10th, 8th, and 8th magnitude stars, respectively. The gray circle shows the 1-m control radius, and the colored ellipses show the 1-, 2-, and 3-σ errors for different sensing positions in 1 s of integration time. The dashed circle shows the 30-cm requirement from the S5 program.


Fig. 8

Numerical computation of sensing precision overlaid with the underlying light intensity. The ellipses show the 1-, 2-, and 3-σ contours in the red science band, for an eighth magnitude star, in a 4-m region surrounding the central lobe. The precision scales approximately as the second spatial derivative of the light intensity.


The results of the numerical simulations are consistent with the earlier analytic estimates, with errors of a few centimeters predicted in all science bands for 1-s exposure times and stars of 8th to 10th magnitude (Table 1). The largest discrepancy was in the blue science band, where the detailed numerical simulations were about 50% worse than the analytic prediction. In the blue science band, the guiding spot (in the near-infrared) has the largest size [Eq. (3)], so it is always partially obscured by the telescope's secondary mirror and supports. This could be incorporated into the analytic estimate through modified, spatially dependent values of the shape parameter c. However, we used a fixed c = 2 appropriate for an unobscured spot, so it is not surprising that the formula tends to predict better performance than the numerical simulations.

Table 1

Table of numerical simulation results.

| Science band | Star magnitude | Flux density at pupil (photons/m²/s) | Science wavelengths (nm) | Guiding wavelengths (nm) [weighted] | Median numerical 3-σ error (cm) | Analytic 3-σ error (cm) |
| --- | --- | --- | --- | --- | --- | --- |
| Blue | 8.0 | 8200 | 400–600 | 870–1000 [937] | 9.7 | 6.1 |
| Green | 8.0 | 2100 | 600–800 | 400–435 [424] | 3.6 | 3.9 |
| Red | 10.0 | 11,600 | 800–1000 | 400–540 [496] | 1.6 | 1.6 |

To understand the shapes of the ellipses, note that at each position the precision depends on the second derivative of the light intensity. Constant intensity distributions give no information, as they are translation invariant. Similarly, linearly increasing distributions give no information, due to the normalization by total flux; higher-order structure, however, does. (As a simple example, if you find yourself standing on a hill with shape y = |x|, you cannot tell where on the hill you are by examining the slope at your position. But you can for the hill y = x².) Said another way, the covariance matrix of the error ellipse will be inversely proportional to the Hessian of the light intensity. This becomes more evident when plotting the sensing precision over a larger region containing both smooth and structured areas, as shown in Fig. 8.

4.

Laboratory Results

4.1.

Overview

A remaining question is whether the numerical simulations and analytic predictions give reasonable expectations for formation flying performance when implemented on actual hardware. We built the Starshade Lateral Alignment Testbed (SLATE) to validate the lateral position sensing approach in the lab. The experimental design is the same as the numerical simulations, where the starshade is moved to a predetermined offset, and the measured intensity on the camera is matched to a precomputed library of images using a least-squares algorithm. This is repeated hundreds of times to determine the accuracy of the position matching. In the following subsections, we give a description of the testbed design, hardware and optical considerations, and experimental results.

4.1.1.

Testbed design

As previously mentioned, starshades are designed to operate at Fresnel numbers F = r²/(λZ) of <20. Here, r is the starshade radius, λ is the wavelength, and Z is the separation; most optical propagation effects are preserved when the Fresnel number is the same. Due to the large separation distances of >10,000 km, it is not possible to optically validate a full-scale starshade on Earth. However, aspects of a flight-like setup may be tested by shrinking the starshade radius r and decreasing the distance Z quadratically. Scaling down the starshade by a factor of 1000 requires a separation one million times smaller, allowing for optical validation with more manageable testbeds sized 1 to 100 m.
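A quick check of this scaling rule, evaluated at the SLATE laser wavelength; the 13-m radius and 40,000-km separation are illustrative flight-like values:

```python
# Fresnel number F = r^2 / (lambda * Z): shrinking r by 1000 and Z by 10^6
# leaves F unchanged, which is what permits subscale lab validation.
def fresnel(r_m, wavelength_m, z_m):
    return r_m ** 2 / (wavelength_m * z_m)

lam = 632e-9                                 # SLATE laser wavelength (m)
print(fresnel(13.0, lam, 40_000e3))          # flight-like scale: F ~ 6.7
print(fresnel(13.0e-3, lam, 40.0))           # r/1000, Z/10^6: identical F ~ 6.7
```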

SLATE is a beam launcher and a camera. The beam launcher consists of an optical fiber, a 100-mm doublet collimating lens, and the starshade mask. These optics are small enough to fit on a two-axis stage, creating a movable beam to simulate shear offsets corresponding to starshade motion. A fold mirror increases the propagation length on the modestly sized optical bench. The camera takes the place of the LOWFS. Figure 9 shows a schematic and image of the testbed.

Fig. 9

(a) Schematic of SLATE; movable fiber beam launcher, and fixed fold mirror and camera. (b) Image of the testbed, partially uncovered. The camera may be translated along the rail to access different Fresnel numbers.


The camera sees “pupil” images, but we do not have a telescope simulator in the beam to create either the pupil of WFIRST, the Zernike phase plate, or any internal recollimating and refocusing optics. This is by choice, as every optical surface adds noise and complication. To simulate the telescope pupil, we just mask out the pixels on the camera corresponding to the effective pupil obscuration. These pixels are not expected to be used in the image matching algorithm in flight either.

SLATE can create optical sensing signals similar to those expected in space but deviates from a “perfect” formation flying lab setup. A summary of the differences between the test setup and flight expectation is presented in Table 2.

Table 2

Comparison of optical, detector, and morphological parameters of SLATE with the flight expectation.

| Parameter | Flight expectation | SLATE |
| --- | --- | --- |
| Fresnel number | 5–7 | 4.5 |
| Light type | Broadband starlight (50–100 nm filtered) | 632-nm laser |
| Wavefront quality | 14-nm wavefront error | >500-nm wavefront error |
| Camera chip | e2v CCD201 | SBIG KAF402-me |
| Camera read noise | 2 electrons/pixel/frame | 40 electrons/pixel/frame |
| Camera dark current | 1.5×10⁻⁴ electrons/pixel/s | 2 electrons/pixel/s |
| Camera clock-induced charge | 0.02 electrons | <1 electron |
| Camera shutter speeds | 0.001 to 100+ s | 0.1 to 100 s |
| Camera flat field calibration | <2% | None |
| Arago spot FWHM | 10 pixels (of 32×32) | 10 pixels (of 32×32) |
| Arago spot SNR | 5/pixel in FWHM | 5/pixel in FWHM |

4.1.2.

Optical considerations

The contrast of the guiding signal is at the 10⁻³ to 10⁻⁴ level, which is challenging to achieve optically, but nowhere near as challenging as building a testbed to simulate the optical performance of the starshade at science contrast, at 10⁻¹⁰ to 10⁻¹¹. This is not just a matter of optical tolerances: the amount of light present is much higher, meaning low-noise detectors are not required. Additionally, effects that are important at the 10⁻⁸ to 10⁻¹¹ contrast levels are irrelevant for formation flying. These include edge glint from scattered sunlight, exozodiacal light, and the high likelihood of source confusion from faint background galaxies.20 However, attaining shear-sensing contrast still requires some care, and we traded off optical tolerances against fidelity to the flight system. Here, we describe some differences between the flight system and our testbed.

For the light source, we elected to use a single-mode fiber laser to simulate the starlight, rather than broadband illumination. Despite the single wavelength (632 nm), the images are similar to those a flight pupil sensor would see, because the sensing wavebands are not particularly broad, at 10% optical bandwidth, and one side of the band will be much brighter than the other due to the steep wavelength dependence of starshade transmission.

The incident beam on the starshade in flight will be a flat wave of starlight with effectively no aberration. A flat beam is not an option in this testbed, as diffraction from the edges of the optics would overwhelm the faint guiding spot. We instead used a beam from a fiber, collimated 100 mm from the starshade, as optical modeling showed the Gaussian beam shape would only marginally affect the spot contrast while eliminating edge diffraction. Our original choice of a precision asphere to collimate the Gaussian beam failed badly due to significant mid-spatial-frequency errors in the lens; our optical model indicated that the spherical aberration from an off-the-shelf doublet, on the other hand, would only marginally affect the guiding signal.

Starshades are meant to be free-floating, which cannot be reproduced in the laboratory. Mounting them with "struts" would cause unacceptable diffraction unless the struts themselves were apodized similarly to the petal edges; this is the approach taken by the experiments at science contrast levels.21 In our case, the starshade was instead manufactured (by Opto-Line International, Inc.) by depositing chrome on an optical reference flat.

Beyond static errors like spherical aberration, the biggest optical challenge was mid- to high-spatial-frequency error. We simulated the expected power spectra of the optical surface roughness of the lenses and starshade and found that operating at Fresnel numbers of F ≈ 7 would require an RMS surface error of ∼5 nm, which is challenging to achieve without active optical control (for example, interferometer reference flats are typically specified to 1/20th of a wave). However, the same simulations indicated that operating closer to F ≈ 4 to 5 would be achievable with bulk, static optics. While this is at the lower range of flight Fresnel numbers, it allowed us to validate the simulations without the added complexity of an active wavefront control system.

4.1.3.

Camera parameters

The flight EMCCD detector for the WFIRST coronagraph, the CCD201 from e2v, is baselined for both the LOWFS and science camera. Unsurprisingly, its performance exceeds the lab detector’s performance by factors ranging from 20 (read noise) to 10,000 (dark current). As such, attempts to match exposure times and flux levels in the testbed to flight levels would result in a much lower SNR than that delivered by the EMCCD. Instead, we adjusted the exposure times and laser power to match the empirically measured SNR of the spot, which ranges from 3 to 8 depending on the wavelength.

Another unknown at this time is the pixel resolution of the LOWFS camera, which is expected to be between 16 and 64 pixels across the pupil diameter. The pixel resolution does not meaningfully affect formation flying performance, provided the sampling is fine enough to properly resolve the spot structure but not so fine that readout noise overwhelms the signal. The lab camera’s native resolution was about 100 pixels across the pupil, which we digitally interpolated down to 32 for a final output image format of 32×32.

4.2.

Experimental Design

The experimental setup used SNRs and spot sizes (as fractions of the pupil diameter) determined from the expected flight-like values. These are listed in Table 2. The relative scaling between the testbed motion and flight motion (in units of mm/cm) was calculated analytically and verified experimentally.

We experimented with different processing of the camera images but opted for a straightforward "image minus dark frame" calibration. This is due to features of the test camera, which included a fixed bias offset, and confounding factors such as background illumination from neighboring laboratories. The dark-subtracted frame was then fed into the image matching algorithm presented in Sec. 6. While we could have obtained better performance using more advanced postprocessing, such as filtering out optical noise, this minimal level of calibration stays close to the flight algorithm. It is also expected that in flight there will be additional error sources, such as unstable flat fields and charge traps due to cosmic ray damage, which will not be in the optical model.

In order to build the sensor model, we first needed to create an image library for the lab. We computed the diffraction pattern on the sensor with models of the lab optics, involving the fiber output beam, collimating lens, and miniature starshade. The sizes and distances are, of course, different as well: 3-m separations and a 6-mm starshade rather than 40,000-km separations and a 26-m starshade. An example of the computed and measured diffraction pattern, before adding the pupil and binning, is shown in Fig. 10. (Figure 11 shows the 32×32 version with the pupil overlaid.) While our contrast measurements were consistent with expectation to within 25%, absolute values of contrast are not relevant, because the images are normalized before being matched.

Fig. 10

A comparison of the (a) testbed measurement with the (b) lab simulation on the same logarithmic image intensity scale. The spot of Arago and other diffraction artifacts are clearly visible. Note the optical noise in the lab image.


Fig. 11

(a), (b) Image matching from the noisy camera image to the model prediction.


We determined the correct laser drive voltage by empirically measuring the SNR of the pixels (after binning to the LOWFS plate scale), approximated as SNR = mean/(standard deviation). We matched the empirical SNR to that expected in space from stars fainter than eighth magnitude at a typical spot size. The exact nature of the noise will change between the flight detector and SLATE: the former will be almost purely Poissonian, while the latter includes Poisson, readout, and dark current noise. It would in principle be possible to independently characterize the different SLATE detector noise sources and their combined distributions, but we opted to use the empirical SNR instead. While we did not analyze the exact form of the noise statistics, it is possible that some variation in our results is due to these subtle effects.
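For concreteness, a minimal sketch of the per-pixel empirical SNR estimate from a stack of repeated frames (the function name and stack layout are illustrative):

```python
import numpy as np

def empirical_snr(stack):
    """Per-pixel SNR from repeated frames, shape (n_frames, ny, nx),
    using the approximation SNR = mean / standard deviation."""
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    return mean / np.maximum(std, 1e-12)   # guard against zero-variance pixels
```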

While the actuator encoder values could be used for open-loop positioning, they had a slight tilt with respect to the optical axis and some backlash. Rather than trying to calibrate these imperfections, we determined the position of the starshade directly from the camera images and ran an acquisition loop to go to the preset grid points, spaced apart by 30 cm (effective). To minimize errors during acquisition due to optical and detector noise, the laser was turned to a bright level such that the image matching always returned the same result. Then, the laser was turned down to the “science intensity” and hundreds of frames were taken at those flux levels. At each grid point, the images were matched to the library, and the corresponding positions were used to generate error ellipses shown in the next section. An example of a single camera image and its matched model are shown in Fig. 11.

4.3.

Results

From the empirical data covariance matrix at each position, we generate error ellipses showing the 1-, 2-, and 3-σ contours. These results were consistent with numerical expectations to within 50%, as shown in Table 3. Plots of the results are presented in Fig. 12. The delivered sensing precision, which was obtained at a much lower signal level than expected in flight, is still well within the tolerance specified for formation flight, and a companion work will demonstrate robust control even with errors far larger than what the sensor can deliver.

Table 3

Comparison between numerical simulations and actual testbed performance of the worst and median sensor precision.

| Source | Metric | Precision |
| --- | --- | --- |
| Simulation | 3-σ precision, worst | 6.7 cm |
| SLATE | 3-σ precision, worst | 10.2 cm |
| Simulation | 3-σ precision, median | 4.0 cm |
| SLATE | 3-σ precision, median | 6.2 cm |

Fig. 12

Results of (a) a model of SLATE at the same flux levels and (b) the sensing precision of SLATE.


The primary reason for the worse performance in the lab than in the numerical simulation is optical noise, that is, blobs of bright light forming structures in the shadow that are not expected in the coronagraph instrument (CGI) flight optical system. This noise is created by light scattered by the imperfect optics. The extra optical noise leads to both statistical and systematic errors: the statistical errors are due to the combination of Poisson, dark, and readout noise; the systematic errors are due to the matching algorithm biasing toward the scattered light structures. (These systematic errors are visible in Fig. 12 as slight shifts in the midpoints of the error ellipses compared to the setpoints.)

Errors in the camera also contribute. The dark level in the camera drifts continuously, and while we always took a background frame before a science frame, the residual noise can appear as a changing gradient from one side of the detector to the other. Another issue was flat-field correction: we did not solve for a flat field on the camera, and thus differences in per-pixel gain can create a spatially dependent systematic error signal. These errors will also be present in flight to some degree, as cosmic ray damage begins to affect the pixels in the detector.

5.

Conclusions

We have presented a lateral sensing scheme appropriate for the challenging task of starshade formation flying, where two spacecraft must be aligned to a precision of 1 m at distances of 20,000 to 80,000 km. The sensing scheme measures the position of the classical Arago spot from light diffracting around the edges of the starshade using an internal pupil sensor on the telescope. This light, which is outside the wavelengths of scientific interest, is bright enough to provide a robust sensing signal. The precision of this sensing scheme is just a few centimeters in shear for star brightnesses of 8 to 10 V magnitudes, which is fainter than any of the expected target stars by factors of >10. The performance of this sensor shows good agreement when compared to analytical calculations, detailed numerical simulations, and laboratory experiments.

No additional hardware is needed to implement this sensor beyond a pupil imager in the telescope. This is already present in the case of the WFIRST coronagraph instrument, where it is used as an "internal" wavefront sensor in coordination with a Zernike spot. Pupil sensors are expected to be present in future missions such as mDOT,22 HabEx,23 and LUVOIR24 as well. As such, these missions could readily accommodate a future starshade rendezvous.

A companion paper (Flinois et al., in preparation)4 will introduce a control scheme that can easily provide enough fidelity to keep the starshade and telescope aligned to the 1 m necessary for imaging extrasolar planets. The control scheme is highly efficient and able to execute nearly optimal trajectories in the differential gravity of L2, with minimal interruptions of science operations for trajectory correction maneuvers. As such, we have high confidence that the formation flying problem, initially considered a major challenge in implementing a starshade, can be solved.

6.

Appendix A: Storage and Computational Requirements for Formation Sensing Algorithm

The baselined flight computer for the coronagraph instrument on WFIRST is the LEON4 processor25 from Cobham Gaisler. Internal testing of the processor reports a maximum performance of 800 MFLOPS, with an "effective" performance (including all low-level overheads) estimated at 76 MFLOPS. The available storage allocated to the coronagraph instrument is about 66 GB.

6.1.

Library Size and Storage Requirements

For formation flying, the control region is 1 m in radius; the image library is expected to be larger, to accommodate some margin for error and initial acquisition. We assume a 3×3-m library, spaced at 2 cm, for (3 m / 2 cm)² = 22,500 images. For 32×32 images stored at 16 bits/pixel, we find a total space requirement of about 22,500 × 32 × 32 pixels × 16 bits/pixel ≈ 368 Mb ≈ 46 MB. Assuming one library is used for each of the three science bands, this leads to a total space requirement of about 150 MB for formation flying, or about 0.2% of the total storage allocation of 66 GB.
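The arithmetic above in a few lines (all values from the text):

```python
# Library sizing: a 3 m x 3 m grid at 2-cm spacing, 32x32-pixel images
# stored at 16 bits/pixel, with one library per science band.
images = int((3.0 / 0.02) ** 2)               # 22,500 images
megabytes = images * 32 * 32 * 16 / 8 / 1e6   # ~46 MB per band
print(images, f"{megabytes:.0f} MB per band") # x3 bands ~ 140 MB, quoted as ~150 MB
```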

6.2.

Computational Requirements

Here, we describe a brute-force implementation of the image matching algorithm. Let n be the number of pixels used in the pupil sensor; for example, n=1024 for the 32×32 format considered earlier. Let m be the number of images in the library. Table 4 outlines the steps in the shear-sensing algorithm with their associated floating point cost.

To convert to FLOPS, we assume each low-level mathematical operation (addition, multiplication, etc.) costs 1 FLOP. For the 22,500 images per library, a full search each second would cost about 70 MFLOPS, which is just within the capabilities of the LEON4. However, after the initial position fix (obtained perhaps with a much coarser library spaced at 10 cm), a search over the entire image library is not necessary, since the starshade does not move fast. With a maximum expected speed of 2 cm/s, only about 25 images would need to be searched after 1 s of motion. More conservatively, one could search a 10-cm radius around the previous fix with a sublibrary of fewer than 100 images, which would consume <0.5 MFLOPS, or about 1% of the 76-MFLOPS capability of the LEON4. The entire calculation would take about 6 ms, contributing 0.6% to the 1-s sensing cadence.

Table 4

Floating point cost for shear-sensing algorithm.

| Operation | Algebraic description | Floating-point operations |
| --- | --- | --- |
| Sum raw image intensities | $\sum I$ | n − 1 |
| Calculate the mean intensity | $\langle I \rangle = \sum I / n$ | 1 |
| Divide a raw image by mean intensity | $I_\mu = I / \langle I \rangle$ | n |
| Subtract the result from each library image | $d_{x,y} = L_{x,y} - I_\mu$ | nm |
| Square the result | $d_{x,y} \to d_{x,y}^2$ | nm |
| Sum the results | $e_{x,y}^2 = \sum d_{x,y}^2$ | (n − 1)m |
| Find the minimum error | $\min[e_{x,y}^2]$ | m − 1 |
| Total cost | | 3nm + 2n + 1 |
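Plugging in the numbers from the text and Table 4 (the 100-image sublibrary size is taken from the discussion above):

```python
# Cost of one image match from Table 4 (the 3nm term dominates).
n, m = 32 * 32, 22_500                # pixels per image, images per library
full = 3 * n * m + 2 * n + 1
sub = 3 * n * 100 + 2 * n + 1         # ~100-image sublibrary after a position fix
print(f"full search: {full / 1e6:.0f} MFLOP")   # ~69 MFLOP -> ~70 MFLOPS at 1 Hz
print(f"sublibrary:  {sub / 1e6:.2f} MFLOP")    # ~0.31 MFLOP, <0.5 MFLOPS at 1 Hz
```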

Acknowledgments

The authors wish to thank the referees for constructive reports that improved the quality of this paper. MB thanks Kendra Short, Phil Willems, Doug Lisman, and Charley Noecker for insights and guidance. The authors thank Opto-Line International, Inc., for workmanship and timely delivery of our mini starshades. This work was performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. Disclosures: The authors declare no financial or other conflicts of interest in this publication.

References

1. S. Seager et al., "Starshade rendezvous probe," (2019). https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20190028272.pdf

2. B. S. Gaudi et al., "The habitable exoplanet observatory (HabEx)," Proc. SPIE 11115, 111150M (2019). https://doi.org/10.1117/12.2530036

3. W. Cash, "Analytic modeling of starshades," Astrophys. J. 738(1), 76 (2011). https://doi.org/10.1088/0004-637X/738/1/76

4. T. L. B. Flinois et al., "Starshade formation flying II: formation control," J. Astron. Telesc. Instrum. Syst.

5. D. Webb et al., "Successful starshade petal deployment tolerance verification in support of NASA's technology development for exoplanet missions," Proc. SPIE 9151, 91511P (2014). https://doi.org/10.1117/12.2057258

6. C. C. Liebe, "Star trackers for attitude determination," IEEE Aerosp. Electron. Syst. Mag. 10(6), 10–16 (1995). https://doi.org/10.1109/62.387971

7. D. P. Scharf et al., "Precision formation flying at megameter separations for exoplanet characterization," Acta Astronaut. 123, 420–434 (2016). https://doi.org/10.1016/j.actaastro.2015.12.044

8. P. Willems, "Starshade to TRL5 (S5) technology development plan," Pasadena, California (2018).

9. M. C. Noecker, "Alignment of a terrestrial planet finder starshade at 20–100 megameters," Proc. SPIE 6693, 669306 (2007). https://doi.org/10.1117/12.736053

10. D. Sirbu and N. J. Kasdin, "Formation flight for a telescope-occulter mission with navigation and sensor noise," in Proc. 4th Int. Conf. Spacecr. Form. Flying Missions (2011).

11. A. Harness and W. Cash, "Enabling formation flying of star shades for the search of earth-like exoplanets," in 8th Int. Workshop Satell. Constell. and Form. Flying (2015).

12. M. Bottom et al., "Precise starshade stationkeeping and pointing with a Zernike wavefront sensor," Proc. SPIE 10400, 104001B (2017). https://doi.org/10.1117/12.2274086

13. T. Flinois et al., "S5: starshade technology to TRL5 milestone 4 final report: lateral formation sensing and control," (2018). https://exoplanets.nasa.gov/internal_resources/1141/

14. F. Castelli et al., "Modelling of stellar atmospheres, new grids of ATLAS9 model atmospheres," (2003).

15. H. Tang et al., "The WFIRST coronagraph instrument optical design update," Proc. SPIE 10400, 1040003 (2017). https://doi.org/10.1117/12.2274549

16. P. Morrissey et al., "Photon counting EMCCD developments for the WFIRST coronagraph," Proc. SPIE 10709, 107090B (2018). https://doi.org/10.1117/12.2309387

17. I. R. King, "Accuracy of measurement of star images on a pixel array," Publ. Astron. Soc. Pac. 95(564), 163 (1983). https://doi.org/10.1086/131139

18. N. Kaiser, J. Tonry, and G. Luppino, "A new strategy for deep wide-field high-resolution optical imaging," Publ. Astron. Soc. Pac. 112(772), 768–800 (2000). https://doi.org/10.1086/pasp.2000.112.issue-772

19. E. Cady, "Boundary diffraction wave integrals for diffraction modeling of external occulters," Opt. Express 20(14), 15196–15208 (2012). https://doi.org/10.1364/OE.20.015196

20. M. M. Hu et al., "Simulation of realistic images for starshade missions," Proc. SPIE 10400, 104001S (2017). https://doi.org/10.1117/12.2273404

21. Y. Kim et al., "Design of a laboratory testbed for external occulters at flight Fresnel numbers," Proc. SPIE 9605, 960511 (2015). https://doi.org/10.1117/12.2186349

22. S. D'Amico et al., "System design of the miniaturized distributed occulter/telescope (mDOT) science mission," in 33rd Annu. Conf. Small Satell. (2019).

23. B. Mennesson et al., "The habitable exoplanet (HabEx) imaging mission: preliminary science drivers and technical requirements," Proc. SPIE 9904, 99040L (2016). https://doi.org/10.1117/12.2240457

24. "The LUVOIR mission concept study interim report," (2018).

25. J. Andersson et al., "LEON processor devices for space missions: first 20 years of LEON in space," in 6th Int. Conf. Space Mission Challenges for Inf. Technol. (SMC-IT), 136–141 (2017). https://doi.org/10.1109/SMC-IT.2017.31


© 2020 Society of Photo-Optical Instrumentation Engineers (SPIE)
Michael Bottom, Stefan Martin, Eric Cady, Megan C. Davis, Thibault Flinois, Dan Scharf, Carl Seubert, Shannon K. Zareh, and Stuart Shaklan "Starshade formation flying I: optical sensing," Journal of Astronomical Telescopes, Instruments, and Systems 6(1), 015003 (3 February 2020). https://doi.org/10.1117/1.JATIS.6.1.015003
Received: 29 May 2019; Accepted: 2 January 2020; Published: 3 February 2020