This work investigates the application of compressed sensing algorithms to the problem of novel view synthesis in synthetic aperture radar (SAR). We demonstrate the ability to generate new images of a SAR target from a sparse set of looks at the target, and we show that this can serve as a data augmentation technique for deep learning-based automatic target recognition (ATR). The newly synthesized views can be used both to enlarge the original, sparse training set and, in transfer learning, as a source dataset for initial training of the network. The success of the approach is quantified by measuring ATR performance on the MSTAR dataset.
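To make the flavor of such an approach concrete, the following is a minimal compressed-sensing sketch in the spirit of the abstract, not the authors' actual pipeline: a sparse point-scatterer scene is recovered from a random subset of its 2-D Fourier (phase-history) samples via iterative soft-thresholding (ISTA), after which any sub-aperture of the recovered spectrum can be imaged as a "new look." The scene size, sampling mask, and solver settings are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                    # scene is n x n pixels
x_true = np.zeros((n, n))
x_true[rng.integers(0, n, 12), rng.integers(0, n, 12)] = 1.0  # point scatterers

# Forward model A(x) = mask * FFT2(x)/n; each retained Fourier sample plays
# the role of a phase-history measurement contributed by an available look.
full_spectrum = np.fft.fft2(x_true) / n
mask = rng.random((n, n)) < 0.25          # sparse set of looks: keep ~25%
y = full_spectrum * mask

# ISTA for the LASSO problem  min_x 0.5*||A(x) - y||^2 + lam*||x||_1.
x = np.zeros((n, n))
lam, step = 0.02, 1.0
for _ in range(200):
    resid = np.fft.fft2(x) / n * mask - y
    grad = n * np.real(np.fft.ifft2(mask * resid))    # adjoint of A
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print("relative recovery error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

# A "new look": image the recovered scene through a sub-aperture (here a
# crude vertical band of Fourier samples standing in for one azimuth window).
band = np.zeros((n, n), dtype=bool)
band[:, : n // 4] = True
new_look = np.abs(np.fft.ifft2(np.fft.fft2(x) * band))
```

In real spotlight SAR the samples lie on a polar annulus and a look corresponds to an azimuth sub-aperture; the Cartesian mask and band above are simplifications for illustration only.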
Attempts to use synthetic data to augment measured data for improved synthetic aperture radar (SAR) automatic target recognition (ATR) performance have been hampered by domain mismatch between the datasets. Past work that leveraged synthetic data in a transfer learning framework has been successful but focused primarily on transferring generic SAR features. Recently, SAMPLE, a paired synthetic and measured dataset, was introduced to the SAR community, enabling the demonstration of good ATR performance using 100% synthetic data. In this work, we examine how to leverage synthetic and measured data together to boost ATR performance using transfer learning. The synthetic dataset corresponds to the MSTAR 15° dataset. We demonstrate that high-quality synthetic data can enhance ATR performance even when substantial measured data is available, and that synthetic data can reduce measured data requirements by over 50% while maintaining classification accuracy.
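The two-stage recipe described here (pretrain on synthetic data, fine-tune on a smaller measured set) can be sketched as a pair of training loops. The PyTorch sketch below is a minimal illustration, not the authors' configuration: random tensors stand in for the SAMPLE and MSTAR chips, and the network architecture, dataset sizes, and learning rates are placeholder assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-ins for SAMPLE synthetic chips and a reduced measured
# MSTAR set; real experiments would load the actual SAR magnitude chips.
synth = TensorDataset(torch.randn(512, 1, 64, 64), torch.randint(0, 10, (512,)))
meas = TensorDataset(torch.randn(128, 1, 64, 64), torch.randint(0, 10, (128,)))

net = nn.Sequential(                      # small CNN for 10 target classes
    nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 16 * 16, 10),
)

def train(dataset, epochs, lr):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for xb, yb in DataLoader(dataset, batch_size=32, shuffle=True):
            opt.zero_grad()
            loss_fn(net(xb), yb).backward()
            opt.step()

train(synth, epochs=5, lr=1e-3)  # stage 1: pretrain on synthetic data
train(meas, epochs=5, lr=1e-4)   # stage 2: fine-tune on the smaller measured set
```

Using a lower learning rate in the second stage is one common way to retain features learned from the synthetic domain while adapting to the measured data.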
Long-range imaging requires effective compensation for the wavefront distortions caused by atmospheric turbulence. These distortions can be characterized by their effect on the point spread function (PSF). Consequently, synthesizing PSFs with the appropriate turbulence properties, for a given set of optics, is critical for modeling and mitigating turbulence. Recent work on sparse and redundant dictionary methods demonstrated a three-order-of-magnitude reduction in the computing time needed to create synthetic PSFs, compared to traditional methods based on a wave propagation approach. The central challenge in harnessing the computational benefit of a dictionary-based approach is the careful choice of the dictionary, or set of dictionaries: the choice must adequately capture the range of turbulence conditions and optical parameters present in the desired application, or the computational benefits will not be realized. Thus, it is critical to understand the extent to which a dictionary trained on data with one set of parameters can be used to synthesize PSFs that represent a different set of experimental conditions. In this work, we examine statistical tests that provide metrics for quantifying the similarity between two sets of PSFs, and we then use these results to measure dictionary performance. We show that our measure of dictionary performance is a function of the turbulence conditions and the experimental optics underlying the training data used to create a dictionary. Knowledge of the functional form of the dictionary performance metric allows us to choose the ideal dictionary, or set of dictionaries, to efficiently model a given range of turbulence and optical conditions. We find that choosing dictionary training data with slightly less turbulence than the desired turbulence condition improves the similarity between synthetic and experimentally measured PSFs.
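The abstract does not name the specific statistical tests or the details of the dictionary pipeline, so the following sketch only illustrates the general shape of the approach: scikit-learn dictionary learning fit to Gaussian-blob stand-ins for turbulent PSFs, with a two-sample Kolmogorov-Smirnov test on a scalar summary statistic as one plausible similarity metric. The synthetic data generator, parameters, and choice of test are all assumptions, not the authors' method.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(1)

def fake_psfs(n_samples, width, size=16):
    """Gaussian-blob stand-ins for turbulent PSFs; `width` loosely plays the
    role of turbulence strength in the real data."""
    yy, xx = np.mgrid[:size, :size] - size // 2
    psfs = []
    for _ in range(n_samples):
        w = width * (1 + 0.2 * rng.standard_normal())
        cy, cx = rng.normal(0.0, 1.0, 2)
        p = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * w ** 2))
        psfs.append((p / p.sum()).ravel())
    return np.array(psfs)

train_set = fake_psfs(300, width=2.0)   # dictionary-training condition
target_set = fake_psfs(300, width=2.5)  # desired (stronger-turbulence) condition

dico = MiniBatchDictionaryLearning(n_components=32, alpha=0.5, random_state=0)
dico.fit(train_set)                     # learn atoms from the training condition
codes = dico.transform(target_set)      # sparse-code PSFs from the target condition
synth = codes @ dico.components_        # dictionary-synthesized PSFs

# One candidate similarity metric: a two-sample Kolmogorov-Smirnov test on a
# scalar summary (peak intensity) of real vs. synthesized PSFs.
stat, p = ks_2samp(target_set.max(axis=1), synth.max(axis=1))
print(f"KS statistic = {stat:.3f}, p-value = {p:.3f}")
```

Sweeping the training `width` relative to the target condition in a setup like this would mimic the paper's question of how dictionary performance varies with the mismatch between training and target turbulence.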