Open Access
Stray light correction method for telescopes with linear detector arrays in push-broom configuration: application to CO2M-CLIM
Abstract

Telescopes employing linear detector arrays in a push-broom configuration enable the reconstruction of two-dimensional images of the Earth by recombining successive one-dimensional captures. This configuration, which typically features a wide field of view in the across-track direction but a narrow one in the along-track direction, often suffers from stray light, which degrades optical quality by introducing artifacts into the images. With increasingly stringent performance requirements, there is a critical need to implement effective stray light (SL) correction algorithms in addition to control by design. We describe the development of such an algorithm, using the cloud imager (CLIM) linear detector array instrument as a case study. Our approach involves calibrating SL kernels obtained by illuminating the instrument with a point-like source from various angles. In the along-track direction, we interpolate the SL kernel for any field angle without initial assumptions about SL behavior. For the across-track direction, we employ a local shift-invariance assumption. When applied to images of a checkerboard scene, which includes transitions between bright and dark areas, our algorithm reduces SL by two orders of magnitude, demonstrating its efficacy and potential for broader application in telescopes with linear detector arrays.

1.

Introduction

In remote sensing, push-broom cameras can generate two-dimensional (2D) images of the Earth by combining successive one-dimensional (1D) images captured by a linear array detector1 (Fig. 1). In such configurations, the camera’s field of view is large in the satellite’s across-track (ACT) direction and small in the along-track (ALT) direction. The linear array detector is parallel to the ACT direction. During the time dt between two acquisitions, the instrument rotates in the ALT direction by the equivalent of the pixel’s instantaneous field of view in the ACT direction. The integration time t_int is ideally equal to dt, or slightly smaller, to minimize information loss.

Fig. 1

(a) Acquisition of successive 1D images in a push-broom configuration. (b) 2D reconstructed image for a checkerboard extended scene.


Off-axis three-mirror anastigmats are ideal optical configurations for such instruments, as they provide a large field of view in a single direction.2,3 They are used, for example, in ProbaV4 and Sentinel-2 MSI.5 Off-axis configurations with four mirrors add an extra degree of freedom and are used, for example, in Landsat 8’s Operational Land Imager.6 Currently in development, the cloud imager (CLIM) instrument from the CO2 monitoring (CO2M) mission7,8 uses an optical design similar to ProbaV but with a reduced field of view of f_x = ±12.8 deg. With three linear array detectors, each with a spectral filter on top, it enables the reconstruction of a 2D image of the Earth at three different wavelength channels (670, 753, and 1370 nm).

Stray light (SL) poses a significant challenge for Earth observation applications,9–12 including those employing a push-broom camera. As SL affects image quality and radiometric accuracy, it must be controlled through appropriate opto-mechanical design and material selection.13,14 In a three-mirror anastigmat (TMA) such as CLIM, baffles, apertures, and light traps are strategically placed to mitigate SL15 [Fig. 2(a)]. Spectral rejection is achieved by using a narrowband spectral filter atop a glass slab, complemented by a broadband filter on the bottom surface [Fig. 2(b)]. Spectral crosstalk is prevented using black masks. However, ghost reflections are present due to partial reflections at glass interfaces and on the detector surface. Figure 2(c) depicts the SL pattern on the linear array detector at 670 nm, predicted by ray tracing when the central pixel x_0 is illuminated by a point-like source. Here, SL is normalized to a nominal signal of 1. The SL diminishes gradually away from x_0 due to scattering effects, whereas ghost reflections are concentrated around x_0. Moreover, it shows that reflection on the detector increases the number of ghosts compared with the case of a perfectly absorbing detector. A worst-case assumption of 20% reflectivity at the detector is used, though the actual reflectivity likely falls between that of a fully absorbing detector and 20%.

Fig. 2

(a) Opto-mechanical design of the CLIM instrument. (b) Spectral rejection architecture. (c) SL kernel predicted by ray tracing at 670 nm. (d) and (e) Estimated SL associated with the checkerboard reference extended scene, computed with the SL kernel assumed rotationally symmetrical at 670 nm.


When the SL requirement is very strict, control by design can be insufficient, necessitating additional SL correction by post-processing.11 In CLIM, the requirement is defined based on the checkerboard scene illustrated in Fig. 1, with transitions between areas of bright and dark radiances. Extended scenes with a transition between bright and dark areas are a common way of specifying SL in Earth observation, as they reproduce large extended areas of land, ocean, or clouds.11,12 For mission success, SL on any pixel should not exceed 2% of the measured radiance, except for pixels within 20 pixels of a transition. At the 1370-nm channel, however, this distance is 10 pixels, as its linear array detector uses pixels twice as large as those in the visible spectral range. Based on ray tracing simulations, it is expected that a correction algorithm reducing the SL by at least one order of magnitude is necessary.

In the past, SL correction by post-processing was mostly limited to deconvolution.16–18 Recently, the demand for higher-performing instruments has driven the development of advanced correction methods.19–21 In Metop-3MI, a remarkable correction factor of two orders of magnitude has been reported, using a matrix method where SL kernels are modulated by the signal coming from various positions within the field of view.11,22,23 With a square field of view of up to ±57 deg, SL kernels in that instrument have been measured with a dynamic range of 10^8, and their dependence within the field of view was interpolated using a local symmetry assumption.11 In this paper, we describe a correction approach for the specific case of push-broom cameras with a linear detector array, with the CLIM instrument as a case study. We describe the calibration method and a direct approach for interpolation of SL kernels. The algorithm’s performance is demonstrated using ray-traced data at 670 nm, assuming, without loss of generality, a rotationally symmetrical dependence of its SL properties.

2.

SL Contribution to the Image for a Given Scene

On a given line y of the recombined 2D image, the measured signal I_mes(x) is the sum of a nominal contribution, I_nom, originating from the image-forming beam, and an SL contribution, I_SL [Eq. (1)]. The SL can be decomposed into spatial components, in-field SL (I_SL^IF) and out-of-field SL (I_SL^OOF), and a spectral component, I_SL^spectral [Eq. (2)]. The latter represents SL originating from wavelengths different from the one considered in the channel. In CLIM, I_SL^spectral = 0 thanks to the optimized spectral filter architecture. Spatial components arise from SL at the wavelength of the considered channel, caused by illumination at various field angles. In CLIM, SL occurs when the illumination angle in the ALT direction is within about ±10 deg of the angle corresponding to the direct illumination of the linear array detector. In the ACT direction, SL from illumination within the instrument field of view (|f_x| ≤ 12.8 deg) is considered in-field (I_SL^IF), whereas for illumination outside that range, it is considered out-of-field (I_SL^OOF)

Eq. (1)

\[ I_{\mathrm{mes}}(x,y) = I_{\mathrm{nom}}(x,y) + I_{\mathrm{SL}}(x,y), \]

Eq. (2)

\[ I_{\mathrm{SL}}(x,y) = I_{\mathrm{SL}}^{\mathrm{IF}}(x,y) + I_{\mathrm{SL}}^{\mathrm{OOF}}(x,y) + I_{\mathrm{SL}}^{\mathrm{spectral}}(x,y). \]
In CLIM, out-of-field SL is below the performance requirement and is therefore neglected. We only consider the component I_SL^IF(x,y), which originates from any field angle illuminating pixels (x_f, y+y_f) such that x_f ∈ [1:N], with N = 3800 the number of pixels on the detector, and y_f ∈ [−Δy:Δy]. Here, Δy = 10 deg / iFOV ≈ 1500, with iFOV representing the instantaneous field of view of the pixel. The distance Δy, expressed in pixels, corresponds to the ALT angular limit of ±10 deg.

We define the SL kernel, SPST_{x_f}(x, y_f), as the SL on pixel x when the instrument observes a point-like source illuminating pixel (x_f, y_f) on the recombined image, such that the linear detector array is directly illuminated when y_f = 0 and the nominal signal is 1. Here, SPST stands for spatial point source transmittance. With that definition, the SL contribution to the measured scene is expressed by Eq. (3), representing a sum over the field of the SL kernel modulated by the nominal signal at the corresponding field. The factor dt/t_int accounts for energy conservation when the integration time is not equal to the time interval between two successive lines

Eq. (3)

\[ I_{\mathrm{SL}}^{\mathrm{IF}}(x,y) = \frac{dt}{t_{\mathrm{int}}} \sum_{x_f=1}^{N} \sum_{y_f=-\Delta y}^{\Delta y} \mathrm{SPST}_{x_f}(x, y_f) \cdot I_{\mathrm{nom}}(x_f, y+y_f). \]
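As an illustration, the following Python sketch evaluates Eq. (3) one reconstructed line at a time; the array layout, function name, and dimensions are assumptions made for illustration and do not represent the operational CLIM processing code.

```python
import numpy as np

def in_field_sl(I_nom, kernels, dt, t_int, delta_y):
    """Sketch of Eq. (3): in-field SL for each line of a reconstructed image.

    I_nom   : (n_lines, N) nominal image, assumed known here for illustration.
    kernels : (N, N, 2*delta_y + 1) array; kernels[xf, x, j] = SPST_xf(x, yf)
              with yf = j - delta_y.
    """
    n_lines, N = I_nom.shape
    I_sl = np.zeros_like(I_nom, dtype=float)
    for y in range(n_lines):
        for xf in range(N):
            for j, yf in enumerate(range(-delta_y, delta_y + 1)):
                yy = y + yf
                if 0 <= yy < n_lines:
                    # SPST_xf(x, yf) modulated by the nominal signal at (xf, y + yf)
                    I_sl[y] += kernels[xf, :, j] * I_nom[yy, xf]
    return (dt / t_int) * I_sl
```

In practice, the triple loop would be vectorized or recast as a matrix product, and the kernel data volume would be reduced by the binning discussed in Sec. 4.6, but the structure mirrors the double sum of Eq. (3).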

3.

SL Correction Principle

The SL correction of an image is performed one line at a time. First, an estimation of the SL component is obtained with Eq. (4), which is analogous to Eq. (3) except that the modulation of the kernel is done with the measured signal. Then, subtracting this result from the measured signal gives an estimate of the corrected signal [Eq. (5)]. Performing this operation for each line provides the estimated 2D corrected image

Eq. (4)

\[ I_{\mathrm{SL}}^{\mathrm{est}}(x,y) = \frac{dt}{t_{\mathrm{int}}} \sum_{x_f=1}^{N} \sum_{y_f=-\Delta y}^{\Delta y} \mathrm{SPST}_{x_f}(x, y_f) \cdot I_{\mathrm{mes}}(x_f, y+y_f), \]

Eq. (5)

\[ I_{\mathrm{corr}}(x,y) = I_{\mathrm{mes}}(x,y) - I_{\mathrm{SL}}^{\mathrm{est}}(x,y). \]
The corrected image contains a second-order residual SL error because the SL kernels are modulated by the measured signal, which itself contains SL. This error may or may not be negligible, depending on the initial SL level and the desired performance. A better estimation of the corrected image can be obtained by performing one or more iterations of the correction, using the above equations while replacing I_mes in Eq. (4) with I_corr. Faster convergence can be achieved by replacing I_mes with I_corr in Eq. (4) at each subsequent line correction, instead of after the full 2D image is corrected. The first approach follows a Jacobi convergence, whereas the second follows a Gauss–Seidel convergence, which is typically twice as fast.
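A minimal sketch of the iterative scheme (Jacobi variant, whole-image updates), reusing the hypothetical in_field_sl helper sketched in Sec. 2, could read as follows; a Gauss–Seidel variant would instead reuse already-corrected lines within the same pass.

```python
def correct_image(I_mes, kernels, dt, t_int, delta_y, n_iter=2):
    """Sketch of Eqs. (4) and (5): iterative SL correction (Jacobi variant)."""
    I_corr = I_mes.copy()
    for _ in range(n_iter):
        # Eq. (4): estimate the SL by modulating the kernels with the latest corrected image
        I_sl_est = in_field_sl(I_corr, kernels, dt, t_int, delta_y)
        # Eq. (5): subtract the estimate from the measured image
        I_corr = I_mes - I_sl_est
    return I_corr
```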

4.

SL Kernel Measurements and Processing

4.1.

Calibration Grid

SL kernels are measured over a calibration grid (x_f, y_f) using point-like source illumination at the corresponding field angles. The matrix SPST_{x_f}(x, y_f) is measured by illuminating the instrument at a fixed value of x_f and varying y_f. For each value of y_f, a line of the matrix SPST_{x_f}(x, y_f) is obtained along the x direction. This process is repeated for various values of x_f, effectively providing the various kernels SPST_{x_f}.

A calibration grid with x_f = [1:N] and y_f = [−Δy:Δy] in steps of 1 would result in an unrealistically long measurement campaign. Therefore, a restricted calibration grid is considered, and interpolation is performed to provide the kernels over the full-resolution grid. For CLIM, SL has a smooth behavior along y_f but shows rapid variations in the near-nominal region due to localized ghosts. For adequate sampling, a grid with one-pixel steps is selected in the range y_f = [−35:35]. Beyond this range, larger steps can be used: y_f = ±[50, 250, 500, 750, 1000, 1250, 1484]. In the x_f direction, the SL variation is mostly a shift, as the instrument is quasi-telecentric. Therefore, a grid with 200-pixel steps is selected. The full calibration grid is shown in Fig. 3.
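For illustration, the grid of Fig. 3 could be generated as follows, a sketch using the values quoted above (variable names are arbitrary):

```python
import numpy as np

N = 3800                                             # detector pixels (ACT)
xf_grid = np.arange(1, N + 1, 200)                   # ACT fields: 200-pixel steps
yf_dense = np.arange(-35, 36)                        # ALT fields: 1-pixel steps near nominal
yf_sparse = np.array([50, 250, 500, 750, 1000, 1250, 1484])
yf_grid = np.sort(np.concatenate([yf_dense, yf_sparse, -yf_sparse]))
```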

Fig. 3

Calibration grid used for the SL kernel measurements.


4.2.

Dynamic Range Decomposition

The SL kernel evolves over a large dynamic range that cannot be resolved by the detector in a single acquisition. Therefore, at each field angle, the signal is acquired at various input power levels or integration times, effectively creating a dynamic range decomposition. Acquisitions with low power or short integration times enable the measurement of the highest values of the SL, whereas the lowest signals remain within the noise. Increasing the input power or integration time saturates the highest signals but brings the lowest signals above the noise. The different acquisitions are then recombined by normalizing them to integration time and input power, then retaining for each pixel x the highest non-saturated signal. This process requires excellent accuracy in the monitoring system. Alternatively, a stitching-based recombination can be implemented, provided there is adequate overlap among levels.
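A possible recombination of the multi-level acquisitions, assuming the monitored input power and integration time are available for each level (names and signature are illustrative, not the actual calibration software):

```python
import numpy as np

def recombine_levels(frames, powers, t_ints, sat_level):
    """Recombine acquisitions of one field angle taken at several dynamic levels.

    frames    : list of raw detector signals (same field angle, one per level)
    powers    : monitored input power for each acquisition
    t_ints    : integration time for each acquisition
    sat_level : raw signal above which a pixel is considered saturated
    """
    frames = np.asarray(frames, dtype=float)
    # normalize each acquisition to its input power and integration time
    norm = frames / (np.asarray(powers)[:, None] * np.asarray(t_ints)[:, None])
    # discard saturated samples, then keep the highest valid signal per pixel
    norm[frames >= sat_level] = -np.inf
    return norm.max(axis=0)
```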

Close to the nominal field angle, a high-density grid is only necessary for measuring the rapidly varying ghosts. Therefore, to reduce the campaign duration, calibrations at y_f = ±[5:30] are performed with a single dynamic level, using the lowest input power and integration time to resolve the ghosts while leaving the scattering in the noise. The noise signal is then artificially removed from the measurement for those fields.

4.3.

SPST Map Normalization

SL kernels must be normalized to the nominal signal. If the image-forming beam produces a sub-pixel nominal image at the detector, the measurements are simply normalized to the signal at pixel coordinate (x_f; y_f = 0). If the nominal image is spread over more than one pixel, due to aberrations or the test collimator’s angular extent, the nominal signal is obtained by summing the signal across those pixels. In CLIM, we consider the nominal signal as extended over (x_f ± 1; 0 ± 1). Finally, the signal on the nominal pixels is removed from the measurement to retain only the SL. Figure 4(a) illustrates the results of these measurements and processing, simulated by ray tracing and confirming the adequate sampling of the SL variations.
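A sketch of this normalization step, assuming each measured map is stored with rows indexed by y_f and columns by detector pixel x (0-based indices; this layout is an assumption):

```python
import numpy as np

def normalize_spst(raw_map, xf, delta_y):
    """Normalize a measured SPST map to the nominal signal and blank the nominal pixels.

    raw_map : (2*delta_y + 1, N) array; row j corresponds to yf = j - delta_y.
    xf      : column index of the nominally illuminated pixel.
    """
    spst = raw_map.astype(float).copy()
    rows = slice(delta_y - 1, delta_y + 2)    # yf in {-1, 0, +1}
    cols = slice(xf - 1, xf + 2)              # x in {xf-1, xf, xf+1}
    nominal = spst[rows, cols].sum()          # nominal signal spread over a 3x3 block
    spst /= nominal
    spst[rows, cols] = 0.0                    # keep only the SL
    return spst
```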

Fig. 4

(a) Calibrated SPST map. (b) SPST interpolation along the y_f coordinate (ALT). (c) Interpolated SPST in full resolution and (d) interpolation error. Color maps are in log10 scale, with normalization to the nominal signal corresponding to the direct image-forming beam.


4.4.

Interpolation Along yf (ALT)

Interpolation is performed to obtain the SPST_{x_f}(x, y_f) matrix with a resolution of one pixel along y_f. A simple linear interpolation is applied to each individual column of the kernel. Figure 4(b) illustrates the interpolation results for two values of x, whereas Fig. 4(c) shows the complete 2D map resulting from the interpolation, with one-pixel sampling in both dimensions. For comparison, Fig. 4(d) shows the interpolation error.

This method makes no a priori assumptions about the SL profile; the only requirement for its efficiency is sufficient sampling along y_f. This is a significant advantage compared with the interpolation method used for Metop-3MI, which relied on a local symmetry assumption. Alternatively, fitting could be employed in smoothly scattered regions, for example, using Harvey or ABg models.24–26
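A minimal sketch of this column-wise linear interpolation (array layout and names are assumptions):

```python
import numpy as np

def interpolate_alt(spst_coarse, yf_grid, delta_y):
    """Interpolate a calibrated SPST map to one-pixel sampling along yf.

    spst_coarse : (len(yf_grid), N) array, one row per calibrated yf value
    yf_grid     : sorted calibrated yf values (dense near 0, sparse beyond)
    """
    yf_full = np.arange(-delta_y, delta_y + 1)
    spst_full = np.empty((yf_full.size, spst_coarse.shape[1]))
    for x in range(spst_coarse.shape[1]):
        # each column (fixed detector pixel x) is interpolated independently
        spst_full[:, x] = np.interp(yf_full, yf_grid, spst_coarse[:, x])
    return spst_full
```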

4.5.

Interpolation Along xf (ACT)

At this stage, the matrices SPST_{x_f}(x, y_f) are known in full resolution for each calibrated value of x_f. Interpolation is then performed to derive the kernels for all values of x_f between 1 and N, with steps of one pixel. A local shift-invariance assumption is made, which is reasonable for a quasi-telecentric instrument such as CLIM.

For a given value x_f = x_f^interp, the matrix is deduced by searching for the closest neighboring calibrated field, x_f1. A shift along x of the associated matrix is performed to obtain the interpolated kernel: SPST_{x_f^interp}(x, y_f) = SPST_{x_f1}(x − (x_f^interp − x_f1), y_f). This leaves missing signal on one of the edges along x, which is filled by applying the equivalent transformation to the kernel associated with the second closest neighbor, x_f2, located on the other side. This process is illustrated in Fig. 5. Finally, it provides N full-resolution SL kernels, SPST_{x_f}, each with dimension N × (2·Δy + 1).
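The shift-and-fill procedure could be sketched as follows, with the calibrated kernels stored in a dictionary keyed by their ACT field (a hypothetical data layout):

```python
import numpy as np

def interpolate_act(xf_interp, calib_fields, calib_maps):
    """Derive the SPST kernel at an uncalibrated ACT field by local shifting.

    calib_fields : 1D array of calibrated xf values
    calib_maps   : dict mapping each calibrated xf to its full-resolution map
                   of shape (2*delta_y + 1, N)
    The nearest calibrated map is shifted along x; the edge left empty by the
    shift is filled from the second-nearest neighbor, shifted the same way.
    """
    order = np.argsort(np.abs(calib_fields - xf_interp))
    xf1, xf2 = calib_fields[order[0]], calib_fields[order[1]]

    def shifted(xf_ref):
        shift = xf_interp - xf_ref
        out = np.full_like(calib_maps[xf_ref], np.nan, dtype=float)
        if shift >= 0:
            out[:, shift:] = calib_maps[xf_ref][:, : out.shape[1] - shift]
        else:
            out[:, :shift] = calib_maps[xf_ref][:, -shift:]
        return out

    result = shifted(xf1)
    fill = shifted(xf2)
    mask = np.isnan(result)
    result[mask] = fill[mask]                 # fill the missing edge from the second neighbor
    return result
```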

Fig. 5

SPST interpolation along the xf coordinate: we search for the first (a) and second (b) neighboring calibration fields and apply shifts to reconstruct the interpolated SPST (c). Color maps are in a log10 scale.


4.6.

SPST Map Binning

Spatial or field binning can be applied to the SL kernels to reduce the quantity of data, thereby decreasing the SL correction computation time. Spatial binning is done by lowering the SL kernel resolution along the x coordinate, by a factor s_spat. Field binning in the ALT direction is done by lowering the SL kernel resolution along the y_f coordinate, by a factor s_ALT. Field binning in the ACT direction is done by reducing the number of kernels, averaging the maps associated with neighboring fields by groups of s_ACT.
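A minimal sketch of the field binning (ALT and ACT), assuming the kernels are stacked in a single array and that simple averaging is used; the exact weighting and the matching treatment of the scene during correction are implementation details not shown here:

```python
import numpy as np

def bin_kernels(kernels, s_alt=1, s_act=1):
    """Field-bin a stack of SPST kernels by averaging neighboring samples.

    kernels : (n_xf, 2*delta_y + 1, N) array = (ACT field, ALT field, detector pixel)
    s_alt   : binning factor along the yf (ALT) axis
    s_act   : grouping factor over neighboring xf kernels (ACT)
    Trailing samples that do not fill a complete bin are dropped in this sketch.
    """
    n_xf, n_yf, n_x = kernels.shape
    k = kernels[: (n_xf // s_act) * s_act, : (n_yf // s_alt) * s_alt, :]
    k = k.reshape(n_xf // s_act, s_act, n_yf // s_alt, s_alt, n_x)
    return k.mean(axis=(1, 3))
```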

Applying spatial binning implies that the estimated SL (I_SL^est) is obtained at a lower resolution too. During SL correction, a simple linear interpolation restores it to full resolution before it is subtracted from the measured image, though this introduces a residual error. On the other hand, field binning averages the signal from neighboring fields, which is equivalent to estimating the SL for an input scene that has been spatially binned, thereby smoothing high frequencies. Therefore, binning contributes to a residual SL correction error, particularly affecting SL at transitions between bright and dark areas of a scene. For CLIM, no spatial binning is applied, but field binning is applied with s_ACT = s_ALT = 20, which corresponds to the width of the performance requirement zone. This provides the optimal data reduction while limiting the effect on the SL correction performance.

5.

SL Correction Performance

SL correction performance is assessed by applying the algorithm to the image of the reference extended scene shown in Fig. 1. This is done with ray tracing data, considering either the assumption of an absorbing detector or the worst-case scenario of a reflective detector. In both cases, the theoretical SL is computed with Eq. (3), and the correction method follows all the steps described in the previous sections.

Figure 6(a) shows the estimated SL associated with the reference scene, obtained using the correction algorithm with enough iterations to reach convergence. Figure 6(b) displays the residual SL error, obtained by subtracting the estimated SL from the measured image. Figure 6(c) presents the profiles along x for the nominal signal, initial SL, and residual SL. Specifically, the residual SL is shown for a single iteration and at convergence, which is reached with two or more iterations. The SL reduction factor is on average 1/25 with a single iteration, whereas a factor of 1/100 is achieved at convergence. Overlaying the performance requirement on the graph shows that a single iteration suffices to reach a satisfactory residual SL level. Interestingly, there is a larger error at the transition zone, which arises from the field binning effect. With an appropriate selection of the field binning factor, this error is correctly restricted to the transition area not included in the performance requirement.

Fig. 6

Estimated SL (a), residual SL after correction (b), and profiles (c) in the assumption of an absorbing detector. (d)–(f) Analogous results for a reflective detector.


Figures 6(d)–6(f) show the analogous results for the case of a reflective detector. In this scenario, the initial SL level is larger due to additional ghosts. Consequently, two iterations are necessary to bring the residual SL below the performance requirement. Moreover, convergence is reached after three iterations. In practice, the real properties of the detector lie somewhere between these two scenarios.

6.

ACT Out-of-Field SL

SL coming from outside the field of view in the ACT direction, I_SL^OOF, has been neglected because its level is much lower than that of I_SL^IF. Moreover, correcting for this contributor is challenging, as the nominal signal from such angles is unknown. For Metop-3MI, out-of-field SL kernels are measured, and the in-field image is mirrored to estimate the out-of-field input radiance.27 This method works well for scenes with large uniform features; however, it cannot predict localized objects such as isolated clouds.

Following a similar approach, a correction algorithm for both I_SL^IF and I_SL^OOF could be built for CLIM. SPST_{x_f} is calibrated for out-of-field ACT angles, considering x_f from 1 − n_OOF to N + n_OOF, with n_OOF the limit for non-zero out-of-field SL. Here, we link the out-of-field pixel coordinate with the field angle by extrapolating the distortion curve. Because there is no nominal illumination for OOF SPST maps, the normalization is done first to the monitoring signal and then to the nominal signal of the closest in-field SPST map. Next, the image I_mes*, extending from pixel 1 − n_OOF to N + n_OOF, is obtained by an ACT mirroring of I_mes. Finally, the estimated SL is obtained with Eq. (6) instead of Eq. (4)

Eq. (6)

\[ I_{\mathrm{SL}}^{\mathrm{est}}(x,y) = \frac{dt}{t_{\mathrm{int}}} \sum_{x_f=1-n_{\mathrm{OOF}}}^{N+n_{\mathrm{OOF}}} \sum_{y_f=-\Delta y}^{\Delta y} \mathrm{SPST}_{x_f}(x, y_f) \cdot I^{*}_{\mathrm{mes}}(x_f, y+y_f). \]
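The ACT mirroring of the measured image could be sketched as follows (a hypothetical helper; the symmetric padding is an assumption consistent with the description above):

```python
import numpy as np

def mirror_extend(I_mes, n_oof):
    """Extend each line to out-of-field ACT pixels by mirroring the in-field edges.

    I_mes : (n_lines, N) measured image; returns (n_lines, N + 2*n_oof),
    used as I_mes* in Eq. (6) as a proxy for the unknown out-of-field radiance.
    """
    return np.pad(I_mes, ((0, 0), (n_oof, n_oof)), mode="symmetric")
```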

7.

Conclusions

In this paper, we presented an SL correction approach for space telescopes equipped with linear detector arrays in a push-broom configuration, using the CLIM instrument as a case study. Our approach successfully reduces the SL level by two orders of magnitude, effectively fulfilling the users’ requirements. The methodology involves estimating the SL pattern associated with an input scene by modulating SL kernels corresponding to point-like source illumination from various field angles.

The SL kernels consist of two main components: ghost patterns near the nominal area and a smooth scattering effect in the long-range area. Due to their large dynamic range, measuring these kernels requires multi-level acquisitions, capturing the signal at various input powers or integration times. Initially calibrated over a specific grid, the kernels are then derived for any field angle illumination through interpolation. In the along-track direction, the method employs simple linear interpolation, making no initial assumptions about the SL properties. The primary requirement is a sufficiently dense calibration grid, particularly near the nominal area, to ensure that rapidly varying ghost features are well sampled. For the across-track direction, we apply a local shift-invariance assumption in our interpolation strategy, resulting in a comprehensive database of SL kernels. Ultimately, spatial and field binning is performed, significantly reducing both data volume and computation time.

We evaluated the algorithm’s efficacy on a checkerboard scene, which featured transitions between areas of bright and dark radiance. Two scenarios were considered: in the first, we assumed an absorbing detector, whereas in the second, we accounted for a larger SL with the assumption of a reflective detector. In the absorbing detector scenario, one iteration of the correction algorithm sufficed to meet the user’s performance requirements, and convergence was achieved in two iterations. For the reflective detector, two iterations were required to meet the performance requirements, with convergence achieved after three iterations.

Finally, we extended our algorithm to consider out-of-field SL in the across-track direction. For that, we use a mirroring technique to deduce the scene in the out-of-field area, as employed for the Metop-3MI instrument. This combined approach provides an effective SL correction algorithm able to reduce the SL of linear detector array instruments in a push-broom configuration, considering all kinds of SL present in the instrument.

Code and Data Availability

There are no supporting data for this paper.

Acknowledgments

This research was funded under the CLIM contract with the European Space Agency.

References

1. S. Liang and J. Wang, Advanced Remote Sensing: Terrestrial Information Extraction and Applications, 2nd ed., Elsevier (2019).

2. L. G. Cook, “Three-mirror anastigmat used off-axis in aperture and field,” Proc. SPIE 0183, 207–211 (1979). https://doi.org/10.1117/12.957416

3. G. E. Romanova and K. D. Rodionova (Butylkina), “Design and analysis of the mirror system with off-axis field-of-view,” Proc. SPIE 10745, 1074514 (2018). https://doi.org/10.1117/12.2321288

4. S. Grabarnik et al., “Compact multispectral and hyperspectral imagers based on a wide field of view TMA,” Proc. SPIE 10565, 1056505 (2017). https://doi.org/10.1117/12.2309101

5. V. Cazaubiel, V. Chorvalli, and C. Miesch, “The multispectral instrument of the Sentinel2 program,” Proc. SPIE 10566, 105660H (2017). https://doi.org/10.1117/12.2308278

6. E. J. Knight and G. Kvaran, “Landsat-8 operational land imager design, characterization and performance,” Remote Sens. 6(11), 10286–10305 (2014). https://doi.org/10.3390/rs61110286

7. Y. Durand et al., “Copernicus CO2M mission for monitoring anthropogenic carbon dioxide emissions from space: payload status,” Proc. SPIE 12264, 1226405 (2022). https://doi.org/10.1117/12.2636158

8. Y. Durand et al., “Status on the development of the Copernicus CO2M mission: monitoring anthropogenic carbon dioxide from space,” Proc. SPIE 12729, 127290V (2023). https://doi.org/10.1117/12.2684820

9. M. Talone et al., “Stray light effects in above-water remote-sensing reflectance from hyperspectral radiometers,” Appl. Opt. 55, 3966–3977 (2016). https://doi.org/10.1364/AO.55.003966

10. S. W. Brown et al., “Stray light and ocean-color remote sensing,” in Proc. IGARSS 2003, 4521–4524 (2003). https://doi.org/10.1109/IGARSS.2003.1295567

11. L. Clermont, C. Michel, and Y. Stockman, “Stray light correction algorithm for high performance optical instruments: the case of Metop-3MI,” Remote Sens. 14, 1354 (2022). https://doi.org/10.3390/rs14061354

12. V. Kirschner, Stray Light Analysis and Minimization, European Space Agency, Space Optics Instrument Design & Technology (SOIDT), Noordwijk, The Netherlands (2017–2023).

13. E. Fest, Stray Light Analysis and Control, SPIE Press, Bellingham, WA, USA (2013).

14. R. Breault, Handbook of Optics, Vol. 1, 38.1–38.35, McGraw-Hill, New York, NY (1995).

15. L. Clermont and L. Aballea, “Stray light control and analysis for an off-axis three-mirror anastigmat telescope,” Opt. Eng. 60(5), 055106 (2021). https://doi.org/10.1117/1.OE.60.5.055106

16. A. Yeo et al., “Point spread function of SDO/HMI and the effects of stray light correction on the apparent properties of solar surface phenomena,” Astron. Astrophys. 561, A22 (2014). https://doi.org/10.1051/0004-6361/201322502

17. W. H. Richardson, “Bayesian-based iterative method of image restoration,” J. Opt. Soc. Am. 62, 55 (1972).

18. L. B. Lucy, “An iterative technique for the rectification of observed distributions,” Astron. J. 79, 745 (1974).

19. P. A. Jansson, B. Bitlis, and J. P. Allebach, Correcting Color Images for Stray-Light Effects by Computationally Solving an Inverse Problem via Selected-Ordinate Image (SORI) Processing, Technical Digest, Optical Society of America, Washington, DC, USA (2005).

20. B. Bitlis, P. A. Jansson, and J. P. Allebach, “Parametric point spread function modeling and reduction of stray light effects in digital still cameras,” Proc. SPIE 6498, 64980V (2007). https://doi.org/10.1117/12.715101

21. Y. Zong et al., “Simple spectral stray light correction method for array spectroradiometers,” Appl. Opt. 45, 1111–1119 (2006). https://doi.org/10.1364/AO.45.001111

22. L. Clermont et al., “Going beyond hardware limitations with advanced stray light calibration for the Metop-3MI space instrument,” Sci. Rep. 14, 19490 (2024). https://doi.org/10.21203/rs.3.rs-4477759/v1

23. L. Clermont et al., “Stray-light calibration and correction for the MetOp-SG 3MI mission,” Proc. SPIE 10704, 1070406 (2018). https://doi.org/10.1117/12.2314208

24. E. Fest, Stray Light Analysis and Control, SPIE Press, Bellingham, WA, USA (2013). https://doi.org/10.1117/3.1000980

25. J. C. Stover, Optical Scattering: Measurement and Analysis, 3rd ed., SPIE Press, Bellingham, WA, USA (2012). https://doi.org/10.1117/3.975276

26. J. E. Harvey, Understanding Surface Scatter: A Linear Systems Formulation, SPIE Press, Bellingham, WA, USA (2019). https://doi.org/10.1117/3.2530114

27. L. Clermont and C. Michel, “Out-of-field stray light correction in optical instruments: the case of Metop-3MI,” J. Appl. Remote Sens. 18(1), 016508 (2024). https://doi.org/10.1117/1.JRS.18.016508

Biography

Lionel Clermont is a senior optical engineer with expertise in space instrumentation and stray light. Among his experiences, he was responsible for the stray light calibration and correction for the Earth observation instrument Metop-3MI. He is also a pioneer in the development of the time-of-flight (ToF) stray light characterization method and received awards such as the Early Career Achievement award of the International Society for Optics and Photonics (SPIE).

Biographies of the other authors are not available.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Lionel Clermont, Aaron Algoedt, and Stefan Lesschaeve "Stray light correction method for telescopes with linear detector arrays in push-broom configuration: application to CO2M-CLIM," Optical Engineering 63(12), 124101 (4 December 2024). https://doi.org/10.1117/1.OE.63.12.124101
Received: 1 July 2024; Accepted: 8 November 2024; Published: 4 December 2024