Telescopes employing linear detector arrays in a push-broom configuration enable the reconstruction of two-dimensional images of the Earth by recombining successive one-dimensional captures. This configuration, which typically features a wide field of view in the across-track direction but a narrow one in the along-track direction, often suffers from stray light, which degrades optical quality by introducing artifacts into the images. With increasingly stringent performance requirements, there is a critical need to implement effective stray light (SL) correction algorithms in addition to control by design. We describe the development of such an algorithm, using the cloud imager (CLIM) linear detector array instrument as a case study. Our approach involves calibrating SL kernels obtained by illuminating the instrument with a point-like source from various angles. In the along-track direction, we interpolate the SL kernel for any field angle without initial assumptions about SL behavior. For the across-track direction, we employ a local shift-invariance assumption. When applied to images of a checkerboard scene, which includes transitions between bright and dark areas, our algorithm successfully reduces SL by two orders of magnitude, demonstrating its efficacy and potential for broader application in telescopes with linear detector arrays.
1. Introduction

In remote sensing, push-broom cameras can generate two-dimensional (2D) images of the Earth by combining successive one-dimensional (1D) images captured by a linear array detector1 (Fig. 1). In such configurations, the camera's field of view is large in the satellite's across-track (ACT) direction and small in the along-track (ALT) direction. The linear array detector is parallel to the ACT direction. During the time between two acquisitions, the instrument rotates in the ALT direction by the equivalent of the pixel's instantaneous field of view. The integration time is ideally equal to this interval, or slightly smaller, giving minimal information loss. Off-axis three-mirror anastigmats are ideal optical configurations for such instruments, as they provide a large field of view in a single direction.2,3 They are used, for example, in ProbaV4 and Sentinel-2 MSI.5 Off-axis configurations with four mirrors add an extra degree of freedom and are used, for example, in Landsat 8's Operational Land Imager.6 Currently in development, the cloud imager (CLIM) instrument from the CO2 monitoring (CO2M) mission7,8 uses a similar optical design to ProbaV but with a reduced field of view. With three linear array detectors, each with a spectral filter on top, it enables the reconstruction of a 2D image of the Earth at three different wavelength channels (670, 753, and 1370 nm). Stray light (SL) poses a significant challenge for Earth observation applications,9–12 including those employing a push-broom camera. As SL affects image quality and radiometric accuracy, it must be controlled through appropriate opto-mechanical design and material selection.13,14 In a three-mirror anastigmat (TMA) such as CLIM, baffles, apertures, and light traps are strategically placed to mitigate SL15 [Fig. 2(a)]. Spectral rejection is achieved by utilizing a narrow band spectral filter atop a glass slab, complemented by a broadband filter on the bottom surface [Fig. 2(b)].
Spectral crosstalk is prevented using black masks. However, ghost reflections are present due to partial reflections at glass interfaces and on the detector surface. Figure 2(c) depicts the SL pattern on the linear array detector at 670 nm, predicted by ray tracing when the central pixel is illuminated by a point-like source. Here, SL is normalized to a nominal signal of 1. The SL diminishes gradually away from the nominal position due to scattering effects, whereas ghost reflections are concentrated around it. Moreover, the figure shows that reflection on the detector increases the number of ghosts compared with the case of a perfectly absorbing detector. A worst-case assumption of 20% reflectivity at the detector is used, though the actual reflectivity likely falls between that of a fully absorbing detector and 20%. When the SL requirement is very strict, control by design can be insufficient, necessitating additional SL correction by post-processing.11 In CLIM, the requirement is defined based on the checkerboard scene illustrated in Fig. 1, with transitions between areas of bright and dark radiances. Extended scenes with a transition between bright and dark areas are a common way of specifying SL in Earth observation, as they reproduce large extended areas of land, oceans, or clouds.11,12 For mission success, SL on any pixel should not exceed 2% of the measured radiance, except for pixels within 20 pixels of a transition. At the 1370-nm channel, however, this distance is 10 pixels, as its linear array detector uses pixels twice as large as those in the visible spectral range. Based on ray tracing simulations, it is expected that a correction algorithm reducing the SL by at least one order of magnitude is necessary.
In the past, SL correction by post-processing was mostly limited to deconvolution.16–18 Recently, the demand for higher-performing instruments has driven the development of advanced correction methods.19–21 In Metop-3MI, a remarkable correction factor of two orders of magnitude has been reported, using a matrix method where SL kernels are modulated by the signal coming from various positions within the field of view.11,22,23 With a large square field of view, SL kernels in that instrument have been measured over a large dynamic range, and their dependence within the field of view was interpolated using a local symmetry assumption.11 In this paper, we describe a correction approach for the specific case of push-broom cameras with a linear detector array, with the CLIM instrument as a case study. We describe the calibration method and a direct approach for interpolation of SL kernels. The algorithm's performance is demonstrated using ray-traced data at 670 nm, assuming, without loss of generality, a rotationally symmetrical dependence of its SL properties.

2. SL Contribution to the Image for a Given Scene

On a given line of the recombined 2D image, the measured signal is the sum of a nominal contribution originating from the image-forming beam and a SL contribution [Eq. (1)]. The SL can be decomposed into spatial components, in-field SL and out-of-field SL, and a spectral component [Eq. (2)]. The latter represents SL originating from wavelengths different from the one being considered in the channel. In CLIM, this spectral component is negligible thanks to the optimized spectral filter architecture. Spatial components arise from SL at the wavelength of the considered channel, caused by illumination at various field angles. In CLIM, SL occurs when the illumination angle in the ALT direction is within about ±10 deg of the angle corresponding to the direct illumination of the linear array detector.
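The inline symbols of Eqs. (1) and (2) were lost in this copy; the decomposition they express can be restated as follows, where the notation is reconstructed from the surrounding prose and is illustrative rather than the authors' original symbols:

```latex
% Eq. (1): the measured signal is the sum of a nominal and a stray light term
M(x_s, y_s) = N(x_s, y_s) + \mathrm{SL}(x_s, y_s)
% Eq. (2): spatial (in-field, out-of-field) and spectral components
\mathrm{SL} = \mathrm{SL}_{\mathrm{IF}} + \mathrm{SL}_{\mathrm{OOF}} + \mathrm{SL}_{\lambda}
```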
In the ACT direction, SL from illumination within the instrument field of view is considered in-field, whereas for illumination outside that range, it is considered out-of-field. In CLIM, out-of-field SL is below the performance requirement and is therefore neglected. We only consider the in-field component, which originates from any field angle illuminating pixels within the detector extent and within a limited ALT range set by the instantaneous field of view of the pixel. That distance, expressed in pixels, corresponds to the ALT angular limit of about ±10 deg. We define the SL kernel as the SL on a given pixel when the instrument observes a point-like source illuminating another pixel on the recombined image, such that the linear detector array is directly illuminated at that pixel, giving a nominal signal of 1. Here, SPST stands for spatial point source transmittance. With that definition, the SL contribution to the measured scene is expressed by Eq. (3), representing a sum over the field of the SL kernel modulated by the nominal signal at the corresponding field. A scaling factor accounts for energy conservation when the integration time is not equal to the time interval between two successive lines.

3. SL Correction Principle

The SL correction of an image is performed one line at a time. First, an estimation of the SL component is obtained with Eq. (4), which is analogous to Eq. (3) except that the modulation of the kernel is done with the measured signal. Then, subtracting this result from the measured signal gives an estimate of the corrected signal [Eq. (5)]. Performing this operation for each line provides the estimated 2D corrected image. The corrected image contains a second-order residual SL error because the SL kernels are modulated by the measured signal, which itself contains SL. This error may or may not be negligible, depending on the initial SL level and the desired performance.
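The estimate-and-subtract principle of Eqs. (4) and (5), with the iterative refinement of the residual second-order error, can be sketched as follows. The function name, the reduction of the kernels to a single detector line, and the matrix layout are illustrative assumptions, not the flight implementation:

```python
import numpy as np

def correct_line(measured, K, n_iter=2):
    """First-order SL correction of one image line, refined by iteration.

    measured : (N,) measured signal on the line
    K        : (N, N) kernel matrix, with K[j, i] the SL on pixel i for a
               point source giving a nominal signal of 1 on pixel j
               (illustrative layout; the real kernels also span the
               along-track direction)
    """
    est = measured.copy()
    for _ in range(n_iter):
        sl_est = est @ K          # Eq. (4): kernels modulated by the current estimate
        est = measured - sl_est   # Eq. (5): subtract the estimated SL
    return est
```

With `n_iter=1` this is the plain first-order correction; larger values refine the estimate by re-modulating the kernels with the corrected signal, as described in the next section.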
A better estimation of the corrected image can be obtained by performing one or more iterations of the correction, using the above equations while replacing the measured signal in Eq. (4) with the corrected estimate. Faster convergence can be achieved by performing this replacement at each subsequent line correction, instead of after the full 2D image is corrected. The first approach follows a Jacobi convergence, whereas the second follows a Gauss–Seidel convergence, which is typically twice as fast.

4. SL Kernel Measurements and Processing

4.1. Calibration Grid

SL kernels are measured over a calibration grid using point-like source illumination at the corresponding field angles. The matrix is measured by illuminating the instrument at a fixed ACT field position and varying the ALT angle. For each ALT angle, a line of the matrix is obtained along the detector direction. This process is repeated for various ACT field positions, effectively providing the various kernels. A calibration grid sampling both field directions in steps of 1 pixel would result in an unrealistically long measurement campaign. Therefore, a restricted calibration grid is considered, and interpolation is performed to provide the kernels over the full-resolution grid. For CLIM, SL has a smooth behavior along the ALT direction but shows rapid variations in the near-nominal region due to localized ghosts. For adequate sampling, a grid with one-pixel steps is selected in that near-nominal range; beyond it, larger steps can be used. In the ACT direction, SL variation is mostly a shift, as the instrument is quasi-telecentric. Therefore, a grid with 200-pixel steps is selected. Finally, the full calibration grid is shown in Fig. 3.

4.2. Dynamic Range Decomposition

The SL kernel evolves over a large dynamic range that cannot be resolved by the detector in a single acquisition. Therefore, at each field angle, the signal is acquired at various input power levels or integration times, effectively creating a dynamic range decomposition.
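A sketch of how such multi-level acquisitions might be recombined is given below; the array shapes, the full-well threshold, and the rule of keeping, per pixel, the level with the highest non-saturated raw signal are illustrative assumptions:

```python
import numpy as np

def recombine_levels(frames, exposures, full_well=40000.0):
    """Recombine multi-level acquisitions into one high-dynamic-range kernel line.

    frames    : (L, N) raw signals at L input levels (illustrative shapes)
    exposures : (L,) integration time x input power for each level
    Pixels at or above `full_well` are treated as saturated.
    """
    frames = np.asarray(frames, dtype=float)
    exposures = np.asarray(exposures, dtype=float)
    norm = frames / exposures[:, None]        # normalize to integration time and power
    valid = frames < full_well                # mask saturated pixels
    raw = np.where(valid, frames, -np.inf)
    best = raw.argmax(axis=0)                 # per pixel, best non-saturated level
    return norm[best, np.arange(frames.shape[1])]
```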
Acquisitions with low power or short integration times enable the measurement of the highest values of the SL, whereas the lowest signals remain within the noise. Increasing the input power or integration time saturates the highest signals but brings the lowest signals above the noise. The different acquisitions are then recombined by normalizing them to integration time and input power, then retaining for each pixel the highest non-saturated signal. This process requires excellent accuracy in the monitoring system. Alternatively, a stitching-based recombination can be implemented, provided there is adequate overlap among levels. Close to the nominal field angle, a high-density grid is only necessary for measuring the rapidly varying ghosts. Therefore, to reduce the campaign duration, calibrations at those near-nominal fields are performed with a single dynamic level, using the lowest input power and integration time to resolve the ghosts while leaving the scattering in the noise. The noise signal is then artificially removed from the measurement for those fields.

4.3. SPST Map Normalization

SL kernels must be normalized to the nominal signal. If the image-forming beam produces a sub-pixel nominal image at the detector, the measurements are simply normalized to the signal at the nominal pixel coordinate. If the nominal image is spread over more than one pixel, due to aberrations or the test collimator's angular extent, the nominal signal is obtained by summing the signal across those pixels. In CLIM, we consider the nominal signal as extended over several pixels. Finally, the signal on nominal pixels is removed from the measurement to retain only the SL. Figure 4(a) illustrates the results of these measurements and processing, simulated by ray tracing and confirming the adequate sampling of the SL variations.

4.4. Interpolation Along yf (ALT)

Interpolation is performed to obtain the matrix with a resolution of one pixel along the ALT direction. A simple linear interpolation is applied to each individual column of the kernel.
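A minimal sketch of this column-wise linear interpolation, assuming the calibrated kernel lines are stacked along the first axis (names and shapes are illustrative):

```python
import numpy as np

def interpolate_alt(y_cal, kernels, y_full):
    """Linear interpolation of SL kernels along the along-track direction.

    y_cal   : (M,) calibrated along-track positions in pixels, increasing
    kernels : (M, N) measured kernel lines, one row per calibrated position
    y_full  : (P,) target positions with one-pixel sampling
    Returns a (P, N) array, each column interpolated independently.
    """
    kernels = np.asarray(kernels, dtype=float)
    out = np.empty((len(y_full), kernels.shape[1]))
    for col in range(kernels.shape[1]):
        out[:, col] = np.interp(y_full, y_cal, kernels[:, col])
    return out
```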
Figure 4(b) illustrates the interpolation results for two values of the ALT angle, whereas Fig. 4(c) shows the complete 2D map resulting from the interpolation, with one-pixel sampling in both dimensions. For comparison, Fig. 4(d) shows the interpolation error. This method makes no a priori assumptions about the SL profile; the only requirement for its efficiency is sufficient sampling along the ALT direction. This is a significant advantage compared with the interpolation method used for Metop-3MI, which relied on a local symmetry assumption. Alternatively, fitting could be employed in smoothly scattered regions, for example, using Harvey or ABg models.24–26

4.5. Interpolation Along xf (ACT)

At this stage, the matrices are known in full resolution for each calibrated ACT field position. Interpolation is then performed to derive the kernels for all ACT field positions across the detector, with steps of one pixel. A local shift-invariance assumption is made, which is reasonable for a quasi-telecentric instrument such as CLIM. For a given field position, the matrix is deduced by searching for the closest neighboring calibrated field. A shift along the ACT direction of the associated matrix is performed to obtain the interpolated kernel. This process results in missing signal on one of the edges along the ACT direction, which is filled by applying the equivalent transformation to the kernel associated with the second closest neighbor, located on the other side. This process is illustrated in Fig. 5(a). Finally, this process provides a set of full-resolution SL kernels.

4.6. SPST Map Binning

Spatial or field binning can be applied to the SL kernels to reduce the quantity of data, thereby decreasing the SL correction computation time. Spatial binning is done by lowering the SL kernel resolution along the image coordinates by a given factor. Field binning in the ALT direction is done by lowering the SL kernel resolution along the ALT field coordinate by a given factor.
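The shift-based across-track interpolation of Sec. 4.5 can be sketched as follows, assuming integer pixel shifts and kernels stored per calibrated field position (all names are illustrative):

```python
import numpy as np

def shift_x(kernel, dx):
    """Integer shift along the ACT axis, padding the vacated edge with NaN."""
    out = np.full(kernel.shape, np.nan)
    n = kernel.shape[1]
    if dx >= 0:
        out[:, dx:] = kernel[:, :n - dx]
    else:
        out[:, :n + dx] = kernel[:, -dx:]
    return out

def interpolate_act(x_target, x_cal, kernels):
    """Shift-invariant ACT interpolation of an SL kernel.

    x_target : field pixel for which a kernel is wanted
    x_cal    : list of calibrated ACT field pixels
    kernels  : dict mapping each calibrated pixel to its (Ny, Nx) kernel
    The nearest calibrated kernel is shifted by (x_target - x_near); the
    edge left empty by the shift is filled from the second-nearest neighbor.
    """
    order = np.argsort(np.abs(np.asarray(x_cal, dtype=float) - x_target))
    x1, x2 = x_cal[order[0]], x_cal[order[1]]
    k = shift_x(kernels[x1], x_target - x1)
    fill = shift_x(kernels[x2], x_target - x2)
    missing = np.isnan(k)
    k[missing] = fill[missing]
    return k
```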
Field binning in the ACT direction is done by reducing the number of kernels, averaging the maps associated with neighboring fields in groups of a given size. Applying spatial binning implies that the estimated SL is obtained at a lower resolution too. During SL correction, a simple linear interpolation restores it to full resolution before it is subtracted from the measured image, though this introduces a residual error. On the other hand, field binning averages the signal from neighboring fields, which is equivalent to estimating the SL for an input scene that has been spatially binned, thereby smoothing high frequencies. Therefore, binning contributes to a residual SL correction error, particularly affecting SL in transitions between bright and dark areas of a scene. For CLIM, no spatial binning is applied, but a field binning is applied with a factor corresponding to the width of the performance requirement zone. This provides the optimal data reduction while limiting the effect on the SL correction performance.

5. SL Correction Performance

SL correction performance is assessed by applying the algorithm to the image of the reference extended scene, shown in Fig. 1. This is done with ray tracing data, considering either the assumption of an absorbing detector or the worst-case scenario of a reflective detector. In both cases, the theoretical SL is computed with Eq. (3), and the correction method follows all the steps described in the previous sections. Figure 6(a) shows the estimated SL associated with the reference scene, obtained using the correction algorithm with enough iterations to reach convergence. Figure 6(b) displays the residual SL error, obtained by subtracting the estimated SL from the measured image. Figure 6(c) presents the profiles along the detector for the nominal signal, initial SL, and residual SL. Specifically, the residual SL is shown for a single iteration and at convergence, which is reached with two or more iterations.
The SL reduction factor is on average 1/25 with a single iteration, whereas a reduction of two orders of magnitude is achieved at convergence. Overlaying the performance requirement on the graph shows that a single iteration suffices to reach a satisfactory residual SL level. Interestingly, there is a larger error at the transition zone, which arises from the field binning effect. With an appropriate selection of the field binning factor, this error is correctly restricted to the transition area not included in the performance requirement. Figures 6(d)–6(f) show the analogous results for the case of a reflective detector. In this scenario, the initial SL level is larger due to additional ghosts. Consequently, two iterations are necessary to bring the residual SL below the performance requirement. Moreover, convergence is reached with at least three iterations. In practice, the real properties of the detector are somewhere in between these two scenarios.

6. ACT Out-of-Field SL

SL coming from outside the field of view in the ACT direction has been neglected because its level is much lower than that of the in-field SL. Moreover, correcting for this contributor is challenging, as the nominal signal from such angles is unknown. For Metop-3MI, out-of-field SL kernels are measured, and the in-field image is mirrored to estimate the out-of-field input radiance.27 This method works well for scenes with large uniform features; however, it cannot predict localized objects such as isolated clouds. Following a similar approach, a correction algorithm for both in-field and out-of-field SL could be built for CLIM. The OOF SPST is calibrated for out-of-field ACT angles, from the edge of the field of view out to the limit for non-zero out-of-field SL. Here, we link the out-of-field pixel coordinate with the field angle by extrapolating the distortion curve. Because there is no nominal illumination for OOF SPST maps, the normalization is done using the monitoring signal and then the nominal signal of the closest in-field SPST map.
Next, the image in the out-of-field region is obtained by an ACT mirroring of the in-field image. Finally, the estimated SL is obtained with Eq. (6) instead of Eq. (4).

7. Conclusions

In this paper, we presented an SL correction approach for space telescopes equipped with linear detector arrays in a push-broom configuration, using the CLIM instrument as a case study. Our approach successfully reduces the SL level by two orders of magnitude, effectively fulfilling the users' requirements. The methodology involves estimating the SL pattern associated with an input scene by modulating SL kernels corresponding to point-like source illumination from various field angles. The SL kernels consist of two main components: ghost patterns near the nominal area and a smooth scattering effect in the long-range area. Due to their large dynamic range, measuring these kernels requires multi-level acquisitions, capturing the signal at various input powers or integration times. Initially calibrated over a specific grid, the kernels are then derived for any field angle illumination through interpolation. In the along-track direction, the method employs simple linear interpolation, making no initial assumptions about the SL properties. The primary requirement is a sufficiently dense calibration grid, particularly near the nominal area, to ensure that rapidly varying ghost features are well sampled. For the across-track direction, we apply a local shift-invariance assumption in our interpolation strategy, resulting in a comprehensive database of SL kernels. Ultimately, spatial and field binning is performed, significantly reducing both data volume and computation time. We evaluated the algorithm's efficacy on a checkerboard scene, which featured transitions between areas of bright and dark radiance. Two scenarios were considered: in the first, we assumed an absorbing detector, whereas in the second, we accounted for a larger SL with the assumption of a reflective detector.
In the absorbing detector scenario, one iteration of the correction algorithm sufficed to meet the user's performance requirements, and convergence was achieved in two iterations. For the reflective detector, two iterations were required to meet the performance requirements, with convergence achieved after three iterations. Finally, we extended our algorithm to consider out-of-field SL in the across-track direction. For that, we used a mirroring technique to deduce the scene in the out-of-field area, as employed for the Metop-3MI instrument. This combined approach provides an effective SL correction algorithm able to improve the SL of linear detector array instruments in a push-broom configuration, considering all kinds of SL present in the instrument.

References

1. S. Liang and J. Wang, Advanced Remote Sensing: Terrestrial Information Extraction and Applications, 2nd ed., Elsevier (2019).
2. L. G. Cook, "Three-mirror anastigmat used off-axis in aperture and field," Proc. SPIE 0183, 207–211 (1979). https://doi.org/10.1117/12.957416
3. G. E. Romanova and K. D. Rodionova (Butylkina), "Design and analysis of the mirror system with off-axis field-of-view," Proc. SPIE 10745, 1074514 (2018). https://doi.org/10.1117/12.2321288
4. S. Grabarnik et al., "Compact multispectral and hyperspectral imagers based on a wide field of view TMA," Proc. SPIE 10565, 1056505 (2017). https://doi.org/10.1117/12.2309101
5. V. Cazaubiel, V. Chorvalli and C. Miesch, "The multispectral instrument of the Sentinel2 program," Proc. SPIE 10566, 105660H (2017). https://doi.org/10.1117/12.2308278
6. E. J. Knight and G. Kvaran, "Landsat-8 operational land imager design, characterization and performance," Remote Sens. 6(11), 10286–10305 (2014). https://doi.org/10.3390/rs61110286
7. Y. Durand et al., "Copernicus CO2M mission for monitoring anthropogenic carbon dioxide emissions from space: payload status," Proc. SPIE 12264, 1226405 (2022). https://doi.org/10.1117/12.2636158
8. Y. Durand et al., "Status on the development of the Copernicus CO2M mission: monitoring anthropogenic carbon dioxide from space," Proc. SPIE 12729, 127290V (2023). https://doi.org/10.1117/12.2684820
9. M. Talone et al., "Stray light effects in above-water remote-sensing reflectance from hyperspectral radiometers," Appl. Opt. 55, 3966–3977 (2016). https://doi.org/10.1364/AO.55.003966
10. S. W. Brown et al., "Stray light and ocean-color remote sensing," in Proc. IGARSS, 4521–4524 (2003). https://doi.org/10.1109/IGARSS.2003.1295567
11. L. Clermont, C. Michel and Y. Stockman, "Stray light correction algorithm for high performance optical instruments: the case of Metop-3MI," Remote Sens. 14, 1354 (2022). https://doi.org/10.3390/rs14061354
12. V. Kirschner, Stray Light Analysis and Minimization, European Space Agency, SOIDT Space Optics Instrument Design & Technology, Noordwijk, The Netherlands (2017–2023).
13. E. Fest, Stray Light Analysis and Control, SPIE Press, Bellingham, WA (2013).
14. R. Breault, Handbook of Optics, Vol. 1, 38.1–38.35, McGraw-Hill, New York, NY (1995).
15. L. Clermont and L. Aballea, "Stray light control and analysis for an off-axis three-mirror anastigmat telescope," Opt. Eng. 60(5), 055106 (2021). https://doi.org/10.1117/1.OE.60.5.055106
16. A. Yeo et al., "Point spread function of SDO/HMI and the effects of stray light correction on the apparent properties of solar surface phenomena," Astron. Astrophys. 561, A22 (2014). https://doi.org/10.1051/0004-6361/201322502
17. W. H. Richardson, "Bayesian-based iterative method of image restoration," J. Opt. Soc. Am. 62, 55 (1972).
18. L. B. Lucy, "An iterative technique for the rectification of observed distributions," Astron. J. 79, 745 (1974).
19. P. A. Jansson, B. Bitlis and J. P. Allebach, "Correcting color images for stray-light effects by computationally solving an inverse problem via selected-ordinate image (SORI) processing," OSA Technical Digest, Optical Society of America, Washington, DC (2005).
20. B. Bitlis, P. A. Jansson and J. P. Allebach, "Parametric point spread function modeling and reduction of stray light effects in digital still cameras," Proc. SPIE 6498, 64980V (2007). https://doi.org/10.1117/12.715101
21. Y. Zong et al., "Simple spectral stray light correction method for array spectroradiometers," Appl. Opt. 45, 1111–1119 (2006). https://doi.org/10.1364/AO.45.001111
22. L. Clermont et al., "Going beyond hardware limitations with advanced stray light calibration for the Metop-3MI space instrument," Sci. Rep. 14, 19490 (2024). https://doi.org/10.21203/rs.3.rs-4477759/v1
23. L. Clermont et al., "Stray-light calibration and correction for the MetOp-SG 3MI mission," Proc. SPIE 10704, 1070406 (2018). https://doi.org/10.1117/12.2314208
24. E. Fest, Stray Light Analysis and Control, SPIE Press, Bellingham, WA (2013). https://doi.org/10.1117/3.1000980
25. J. C. Stover, Optical Scattering: Measurement and Analysis, 3rd ed., SPIE Press, Bellingham, WA (2012). https://doi.org/10.1117/3.975276
26. J. E. Harvey, Understanding Surface Scatter: A Linear Systems Formulation, SPIE Press, Bellingham, WA (2019). https://doi.org/10.1117/3.2530114
27. L. Clermont and C. Michel, "Out-of-field stray light correction in optical instruments: the case of Metop-3MI," J. Appl. Remote Sens. 18(1), 016508 (2024). https://doi.org/10.1117/1.JRS.18.016508
Biography

Lionel Clermont is a senior optical engineer with expertise in space instrumentation and stray light. Among other roles, he was responsible for the stray light calibration and correction for the Earth observation instrument Metop-3MI. He is also a pioneer in the development of the time-of-flight (ToF) stray light characterization method and has received awards such as the Early Career Achievement Award of the International Society for Optics and Photonics (SPIE).