There is a history and understanding of exploiting moving targets within ground moving target indicator (GMTI) data, including methods for modeling performance. However, many assumptions valid for GMTI processing are invalid for synthetic aperture radar (SAR) data. For example, traditional GMTI processing assumes targets are exo-clutter and that the system uses a GMTI waveform, i.e., low bandwidth (BW) and low pulse repetition frequency (PRF). Conversely, SAR imagery is typically formed to focus data at zero Doppler and requires high BW and high PRF. Therefore, many of the techniques used in performance estimation of GMTI systems are not valid for SAR data. However, as demonstrated by papers in the recent literature [1-11], there is interest in exploiting moving targets within SAR data. The techniques employed vary widely, including filter banks that form images at multiple Dopplers, smear detection, and waveform design that addresses the issue directly. The above work validates the need for moving target exploitation in SAR data, but it does not represent a theory that allows performance to be predicted or bounded. This work develops an approach to estimate and/or bound performance for moving target exploitation specific to SAR data. Synthetic SAR data are generated across a range of sensor, environment, and target parameters to test the exploitation algorithms under specific conditions. This provides a design tool that allows radar systems to be tuned for specific moving target exploitation applications. In summary, we derive a set of rules that bound the performance of specific moving target exploitation algorithms under variable operating conditions.
The polar format algorithm (PFA) is computationally faster than back projection for producing spotlight mode synthetic
aperture radar (SAR) imagery. This is especially important in applications such as video SAR for persistent surveillance, where
images may need to be produced in real time. PFA's speed is largely due to making a planar wavefront assumption and forming
the image onto a regular grid of pixels lying in a plane. Unfortunately, both assumptions cause loss of focus in airborne
persistent surveillance applications. The planar wavefront assumption causes a loss of focus in the scene for pixels that
are far from scene center. The planar grid of image pixels causes loss of depth of focus for conic flight geometries.
In this paper, we present a method to compensate for the loss of depth of focus while warping the image onto a terrain
map to produce orthorectified imagery. This technique applies a spatially variant post-filter and resampling to correct
the defocus while dewarping the image. This work builds on spatially variant post-filtering techniques previously
developed at Sandia National Laboratories in that it incorporates corrections for terrain height and circular flight paths.
This approach produces high quality SAR images many times faster than back projection.
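The dewarping step described above can be illustrated with a minimal sketch: each pixel of the orthorectified output is mapped back to slant-plane image coordinates and interpolated. The height-dependent mapping below (a simple range shift proportional to terrain height, with a made-up `layover_scale` parameter) is a placeholder assumption, not the paper's geometry model, and the spatially variant defocus-correction filter is omitted for brevity.

```python
# Hedged sketch of terrain-aware image dewarping via inverse-map resampling.
# The coordinate mapping here is assumed for illustration; a real system
# would derive it from the collection geometry and terrain map.
import numpy as np
from scipy.ndimage import map_coordinates

def dewarp_to_ground(image, terrain_height, layover_scale=0.1):
    """Resample a complex slant-plane image onto a ground grid.

    image          : 2-D complex array (slant-plane SAR image)
    terrain_height : 2-D real array, one height per output ground pixel
    layover_scale  : assumed image-row shift per unit of terrain height
    """
    rows, cols = np.indices(terrain_height.shape).astype(float)
    # Assumed mapping: terrain height shifts the range (row) coordinate.
    src_rows = rows + layover_scale * terrain_height
    src_cols = cols
    coords = np.stack([src_rows, src_cols])
    # Interpolate real and imaginary parts separately to preserve phase.
    re = map_coordinates(image.real, coords, order=1, mode="nearest")
    im = map_coordinates(image.imag, coords, order=1, mode="nearest")
    return re + 1j * im

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
height = 5.0 * np.ones((64, 64))   # flat terrain 5 m above the reference plane
ortho = dewarp_to_ground(img, height)
```

Because the resampling is a pure gather operation per output pixel, it can be fused with a spatially variant post-filter in the same pass, which is the efficiency the abstract describes.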
We present a technique for aperture weighting for use in video synthetic aperture radar (SAR). In video SAR, the
aperture time required to achieve the desired cross-range resolution typically exceeds the frame period. As a result, there
can be a significant overlap in the collected phase history used to form consecutive images in the video. Video SAR
algorithms seek to exploit this overlap to avoid unnecessary duplication of processing. When no aperture weighting or
windowing is used, one can simply form oversampled SAR images from the non-overlapping sub-apertures using
coherent back projection (or other similar techniques). The resulting sub-aperture images may be coherently summed to
produce a full resolution image. A simple approach to windowing for sidelobe control is to weight the sub-apertures
during summation of the images. Our approach produces two or more weighted images for each sub-aperture
that can be linearly combined to approximate any desired aperture weighting. In this method we achieve nearly the
same sidelobe control as weighting the phase history data and forming a new image for each frame without losing the
computation savings of the sub-aperture image combining approach.
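The window-approximation idea can be sketched numerically: if each sub-aperture contributes images formed with two basis weightings (here, uniform and a linear ramp, an assumed choice), then a least-squares fit of each segment of the desired window gives the combination coefficients, and by linearity of image formation the same coefficients combine the sub-aperture images. The Hamming window and 8-pulse sub-aperture length below are arbitrary illustration values.

```python
# Sketch: approximate a full-aperture window as per-sub-aperture linear
# combinations of two basis weightings (uniform + ramp). The fit is done
# on the window itself; the same coefficients would weight the
# precomputed sub-aperture images.
import numpy as np

n_pulses, sub_len = 128, 8
window = np.hamming(n_pulses)            # desired full-aperture weighting
ramp = np.linspace(-1.0, 1.0, sub_len)   # ramp basis within a sub-aperture
uniform = np.ones(sub_len)
basis = np.column_stack([uniform, ramp])

approx = np.empty_like(window)
for k in range(n_pulses // sub_len):
    seg = window[k * sub_len:(k + 1) * sub_len]
    # Least-squares fit: seg ≈ a*uniform + b*ramp
    (a, b), *_ = np.linalg.lstsq(basis, seg, rcond=None)
    approx[k * sub_len:(k + 1) * sub_len] = a * uniform + b * ramp

max_err = np.max(np.abs(approx - window))
```

Because the window varies smoothly across any short sub-aperture, the piecewise fit error stays small, which is why only two weighted images per sub-aperture suffice for good sidelobe control while keeping the sub-aperture reuse savings.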