Over the past decades, there have been many approaches to synthetic aperture radar (SAR) automatic target recognition (ATR). ATR includes detection, classification, and identification of targets, scene, and context. Recently, the explosion of deep learning methods has attracted numerous researchers to compare machine learning methods for SAR ATR. This paper reviews many approaches to SAR recognition and discerns the most promising. The Moving and Stationary Target Acquisition and Recognition (MSTAR) data set provides a common benchmark against which to evaluate advances from the community. The paper reviews many of the recently published techniques to determine the state of the art in emerging concepts.
Automatic Target Recognition (ATR) seeks to improve upon techniques from signal processing, pattern recognition (PR), and information fusion. Currently, there is interest in extending traditional ATR methods by employing Artificial Intelligence (AI) and Machine Learning (ML). In support of current opportunities, the paper discusses a methodology entitled Systems Experimentation efficiency effectiveness Evaluation Networks (SEeeEN). ATR differs from PR in that ATR is a system deployment leveraging PR in a networked environment for mission decision making, while PR/ML is a statistical representation of patterns for classification. ATR analysis has long been part of the COMPrehensive Assessment of Sensor Exploitation (COMPASE) Center, utilizing measures of performance (e.g., efficiency) and measures of effectiveness (e.g., robustness) for ATR evaluation. The paper highlights available multimodal data sets for Automated ML Target Recognition (AMLTR).
Novel techniques are necessary to improve the current state of the art in Aided Target Recognition (AiTR), especially for persistent intelligence, surveillance, and reconnaissance (ISR). A fundamental assumption that current AiTR systems make is that operating conditions remain semi-consistent between the training samples and the testing samples. Today's electro-optical AiTR systems are still not robust to common occurrences such as changes in lighting conditions. In this work, we explore the effect of systematic variation in lighting conditions on vehicle recognition performance. In addition, we explore the use of low-dimensional nonlinear representations of high-dimensional data, derived from electro-optical synthetic vehicle images using manifold learning (specifically, diffusion maps), on recognition performance. Diffusion maps have been shown to be a valuable tool for extracting the inherent underlying structure in high-dimensional data.
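The following is a minimal sketch of the diffusion-map embedding described above, assuming the vehicle images are flattened into row vectors; the Gaussian kernel bandwidth eps, the diffusion time t, and the number of retained coordinates are illustrative choices, not values taken from the paper.

```python
import numpy as np

def diffusion_map(X, eps=1.0, n_components=3, t=1):
    """Minimal diffusion-map embedding (Coifman and Lafon).
    X: (n_samples, n_features), e.g., flattened vehicle images."""
    # Pairwise squared Euclidean distances and Gaussian affinities
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / eps)
    d = K.sum(axis=1)
    # Symmetric conjugate of the row-normalized Markov matrix;
    # it shares the Markov matrix's eigenvalues
    A = K / np.sqrt(np.outer(d, d))
    vals, vecs = np.linalg.eigh(A)
    order = np.argsort(vals)[::-1]          # descending eigenvalues
    vals, vecs = vals[order], vecs[:, order]
    # Recover right eigenvectors of the Markov matrix; drop the
    # trivial constant eigenvector and scale by eigenvalue**t
    psi = vecs / vecs[:, [0]]
    return psi[:, 1:n_components + 1] * vals[1:n_components + 1] ** t
```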
Interest in the use of active electro-optical (EO) sensors for non-cooperative target identification has steadily increased as the quality and availability of EO sources and detectors have improved. A unique and recent innovation has been the development of an airborne synthetic aperture imaging capability at optical wavelengths. To effectively exploit this new data source for target identification, one must develop an understanding of target-sensor phenomenology at those wavelengths. Current high-frequency, asymptotic EM predictors are computationally intractable for such conditions, as their ray density is inversely proportional to wavelength. As a more efficient alternative, we have developed a geometric-optics-based simulation for synthetic aperture ladar that seeks to model the second-order statistics of the diffuse scattering commonly found at those wavelengths, but with a much lower ray density. Code has been developed, ported to high-performance computing environments, and tested on a variety of target models.
A pixel-level Generalized Likelihood Ratio Test (GLRT) statistic for hyperspectral change detection is developed to mitigate false change caused by image parallax. Change detection, in general, presents the difficult problem of discriminating significant changes from insignificant changes caused by radiometric calibration, image registration issues, and varying view geometries. We assume that the images have been registered and that each pixel pair provides a measurement from the same spatial region in the scene. Although advanced image registration methods exist that can reduce mis-registration to subpixel levels, residual spatial mis-registration can still be incorrectly detected as significant change. Similarly, changes in sensor viewing geometry can lead to parallax error in a cluttered urban scene, where tall structures, such as buildings, appear to move. Our algorithm exploits the inherent relationship between the image views, drawing on the theory of stereo vision, to mitigate parallax by searching in the assumed parallax direction. Mitigation of the parallax-induced false alarms is demonstrated using hyperspectral data in the experimental analysis. The algorithm is examined and compared to the existing chronochrome anomalous change detection algorithm to assess performance.
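For reference, below is a minimal sketch of the chronochrome baseline mentioned above, following the standard Schaum-Stocker formulation; the paper's parallax-direction search is not reproduced here, and the per-pixel Mahalanobis residual score is one common choice of anomaly statistic rather than the paper's exact test.

```python
import numpy as np

def chronochrome(X, Y):
    """Chronochrome anomalous change detection (Schaum-Stocker).
    X, Y: (n_pixels, n_bands) co-registered hyperspectral images
    from the two collection times. Returns per-pixel change scores."""
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mx, Y - my
    n = X.shape[0]
    Cxx = Xc.T @ Xc / n
    Cyx = Yc.T @ Xc / n
    L = Cyx @ np.linalg.inv(Cxx)          # best linear predictor of Y from X
    resid = Yc - Xc @ L.T                 # prediction residual per pixel
    Ce = resid.T @ resid / n              # residual covariance
    w = np.linalg.solve(Ce, resid.T)      # whitened residuals
    return np.einsum('ij,ji->i', resid, w)  # Mahalanobis score per pixel
```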
KEYWORDS: 3D modeling, Solid modeling, 3D acquisition, Scattering, Computer aided design, Data modeling, Detection and tracking algorithms, Radar, Feature extraction, 3D metrology
In this paper we present an algorithm for target validation using 3-D scattering features. Building a high-fidelity 3-D CAD model is a key step in the target validation process. 3-D scattering features were introduced previously [1] to capture the spatial and angular scattering properties of a target. The 3-D scattering feature set for a target is obtained by using the 3-D scattering centers predicted from the shooting and bouncing ray technique and establishing a correspondence between the scattering centers and their associated angular visibility. A 3-D scattering feature can be interpreted as a matched filter for a target, since the radar data projected onto the feature are matched to the spatial and angular scattering behavior of the target. Furthermore, the 3-D scattering features can be tied back to the target geometry using the trace-back information computed during the extraction process. By projecting the measured radar data onto a set of 3-D scattering features and examining the associated correlations and trace-back information, the quality of the 3-D target CAD model used for synthetic signature modeling can be quantified. The correlation and trace-back information can point to regions of a target that differ from the 3-D CAD model. Results for the canonical Slicy target using the algorithm are presented.
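As a rough illustration of the matched-filter interpretation above, the sketch below correlates measured radar data with the predicted response of a single 3-D scattering feature; the predicted response would come from the shooting-and-bouncing-ray extraction, which is not reproduced here, and the normalized-correlation form is an assumption rather than the authors' exact metric.

```python
import numpy as np

def feature_correlation(measured, predicted):
    """Normalized correlation between measured complex radar data and
    the predicted response of one 3-D scattering feature, both sampled
    on the same (frequency, aspect) grid. Values near 1 suggest the
    CAD model matches the target in this feature's region."""
    m = measured.ravel()
    p = predicted.ravel()
    return np.abs(np.vdot(p, m)) / (np.linalg.norm(p) * np.linalg.norm(m))
```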
The electromagnetic scattered field from an electrically large target can often be well modeled as if it emanates from a discrete set of scattering centers (see Fig. 1). In the scattering center extraction tool we developed previously based on the shooting and bouncing ray technique, no correspondence is maintained amongst the 3D scattering centers extracted at adjacent angles. In this paper we present a multi-dimensional clustering algorithm to track the angular and spatial behaviors of 3D scattering centers and group them into features. The extracted features for the Slicy and backhoe targets are presented. We also describe two metrics for measuring the angular persistence and spatial mobility of the 3D scattering centers that make up these features, in order to gather insights into target physics and feature stability; a sketch of these metrics follows below. We find that the features that are most persistent are also the most mobile, and we discuss the implications for optimal SAR imaging.
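The sketch referenced above gives one plausible rendering of the two metrics for a single extracted feature; the exact definitions of persistence and mobility are assumptions for illustration, not the authors' formulas.

```python
import numpy as np

def persistence_and_mobility(centers):
    """Plausible metrics for one feature (cluster) of 3D scattering
    centers. centers: (n, 4) rows of (azimuth_deg, x, y, z).
    Ignores azimuth wrap-around for simplicity."""
    az = centers[:, 0]
    xyz = centers[:, 1:]
    persistence = az.max() - az.min()            # angular extent (deg)
    mobility = np.linalg.norm(xyz.std(axis=0))   # spatial spread of the center
    return persistence, mobility
```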
KEYWORDS: Synthetic aperture radar, Scattering, 3D image processing, Sensors, Target recognition, Radar, 3D acquisition, 3D modeling, Image restoration, Monte Carlo methods
Synthetic Aperture Radar (SAR) sensors have many advantages over electro-optical (EO) sensors for target recognition applications, such as range-independent resolution and superior poor-weather performance. However, the relative unavailability of SAR data to the basic research community has retarded analysis of the fundamental invariant properties of SAR sensors relative to the extensive invariant literature for EO, and in particular photographic, sensors. Prior work reported at this conference developed the theory of SAR invariants based on the radar scattering center concept and provided several examples of invariant configurations of SAR scatterers from measured and synthetic SAR image data. This paper will show that invariant scattering configurations can be extracted from predicted 3D scatterer data and used to predict invariant features in measured SAR image data.
KEYWORDS: Scattering, 3D modeling, Radar, 3D acquisition, 3D image processing, Automatic target recognition, 3D image reconstruction, Reflectors, Sensors, Databases
Automatic Target Recognition (ATR) is difficult in general, but especially with radar. However, the problem can be greatly simplified by using the 3-D reconstruction techniques presented at SPIE [Stuff] over the previous two years. Now, instead of matching seemingly random signals in 1-D or 2-D, one must match scattering centers in 3-D. This method tracks scattering centers through an image collection sequence that would typically be used for SAR image formation. A major difference is that this approach naturally allows object motion (in fact, the more the object moves, the better), and the resulting 'image' is a 3-D set of scattering centers. We extract scattering centers directly from synthetic data to build a database, in anticipation of comparing the relative separability of these reconstructed scattering centers against more traditional approaches to ATR.
Synthetic Aperture Radar (SAR) sensors have many advantages over electro-optical (EO) sensors for target recognition applications, such as range-independent resolution and superior poor-weather performance. However, the relative unavailability of SAR data to the basic research community has retarded analysis of the fundamental invariant properties of SAR sensors relative to the extensive invariant literature for EO, and in particular photographic, sensors. Prior work reported at this conference developed the theory of SAR invariants based on the radar scattering center concept. This paper will give several examples of invariant configurations of SAR scatterers from measured SAR image data.
In this paper we evaluate the ability of the Matched Subspace Detector (MSD), Matched Filter Detector (MFD), and Orthogonal Subspace Projection (OSP) to discriminate material types in laboratory samples of intimately mixed bidirectional reflectance data. The analysis consists of a series of experiments where bidirectional reflectance spectra of intimate mixtures of enstatite-olivine and anorthite-olivine in various proportions are converted to single scattering albedo (SSA) using Hapke's model for bidirectional reflectance. The linearized SSA spectra are used as inputs to the various detectors, and the output of each is evaluated as a function of the proportion of target to interference. Results are presented as a series of figures that show that, overall, the MSD has a higher target-to-background separation (i.e., better class separation) than either the MFD or the OSP. This target-to-background separation results in fewer false alarms for the MSD than for either of the other two detectors.
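Below is a minimal sketch of the reflectance-to-SSA conversion step, assuming a simplified isotropic Hapke model (isotropic phase function, no opposition effect) and inverting for the albedo w numerically; the default viewing angles are illustrative, not the laboratory geometry used in the paper.

```python
import numpy as np
from scipy.optimize import brentq

def hapke_rc(w, mu0, mu):
    """Simplified isotropic Hapke reflectance: with phase function P=1
    and no opposition effect, the bracket reduces to H(mu0)*H(mu)."""
    H = lambda x: (1 + 2 * x) / (1 + 2 * x * np.sqrt(1 - w))
    return (w / 4.0) / (mu0 + mu) * H(mu0) * H(mu)

def reflectance_to_ssa(r, inc_deg=30.0, emis_deg=0.0):
    """Invert a measured bidirectional reflectance value to single
    scattering albedo w by root finding. hapke_rc is monotone in w,
    so brentq succeeds whenever r lies in the achievable range."""
    mu0 = np.cos(np.radians(inc_deg))
    mu = np.cos(np.radians(emis_deg))
    return brentq(lambda w: hapke_rc(w, mu0, mu) - r, 1e-9, 1 - 1e-9)
```

Applied band by band, this yields the SSA spectra that serve as inputs to the detectors.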
This paper presents a linear system approximation for automated analysis of passive, long-wave infrared (LWIR) imagery. The approach is based on the premise that, for a time-varying ambient temperature field, the ratio of object surface temperature to ambient temperature is independent of amplitude and is a function only of frequency. Thus, for any given material, it is possible to compute a complex transfer function in the frequency domain with real and imaginary parts that are indicative of the material type. Transfer functions for a finite set of ordered points on a hypothesized object create an invariant set for that object. This set of variates is then concatenated with another set of variates (obtained either from the same object or a different object) to form two random complex vectors. A statistical test of affine independence between the two random vectors is facilitated by decomposing the generalized correlation matrix into canonical form and testing the hypothesis that the sample canonical correlations are all zero for a fixed probability of false alarm (PFA). In the case of joint Gaussian distributions, the statistical test is a maximum likelihood test. Results are presented using real images.
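A minimal sketch of the two computational steps described above, under the stated assumptions: the transfer function as a frequency-domain ratio of mean-removed temperature histories, and sample canonical correlations obtained by SVD of QR-orthonormalized variates. The threshold test that all canonical correlations are zero is omitted here.

```python
import numpy as np

def transfer_function(surface_T, ambient_T):
    """Complex transfer function: ratio of object-surface to ambient
    temperature in the frequency domain (DC bin dropped to avoid 0/0
    after mean removal)."""
    S = np.fft.rfft(surface_T - surface_T.mean())
    A = np.fft.rfft(ambient_T - ambient_T.mean())
    return S[1:] / A[1:]

def canonical_correlations(U, V):
    """Sample canonical correlations between two complex random
    vectors observed over n trials. U: (n, p), V: (n, q). The
    singular values of Qu^H Qv are the canonical correlations."""
    Qu, _ = np.linalg.qr(U - U.mean(axis=0))
    Qv, _ = np.linalg.qr(V - V.mean(axis=0))
    return np.linalg.svd(Qu.conj().T @ Qv, compute_uv=False)
```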
The recent public release of high resolution Synthetic Aperture Radar (SAR) data collected by the DARPA/AFRL Moving and Stationary Target Acquisition and Recognition (MSTAR) program has provided a unique opportunity to promote and assess progress in SAR ATR algorithm development. This paper will suggest general principles to follow and report on a specific ATR performance experiment using these principles and this data. The principles and experiments are motivated by AFRL experience with the evaluation of the MSTAR ATR.
Synthetic Aperture Radar (SAR) sensors have many advantages over electro-optic (EO) sensors for target recognition applications, such as range-independent resolution and superior poor-weather performance. However, the relative unavailability of SAR data to the basic research community has retarded analysis of the fundamental invariant properties of SAR sensors relative to the extensive invariant literature for EO, and in particular photographic, sensors. This paper develops the basic geometric imaging transformation associated with SAR from first principles and then gives an existence proof for several geometric scatterer configurations that give rise to SAR image invariants.
In this paper we address the problem of detecting targets in hyperspectral images when the target signature is buried in random noise and interference (from other materials in the same pixel). We assume that the hyperspectral pixel measurement is a linear combination of the target and interference signatures observed in additive noise. The linear mixing assumption leads to a linear vector space interpretation of the measurement vector, which can be decomposed into a noise-only subspace and a target-plus-interference subspace. While it is true that the target and interference subspaces are orthogonal to the noise-only subspace, the target subspace and interference subspace are, in general, not orthogonal. The non-orthogonality between the target and interference subspaces results in leakage of interference signals into the output of matched filters, resulting in false detections (i.e., higher false alarm rates). In this paper, we replace the Matched Filter Detector (MFD), which is based on orthogonal projections, with a Matched Subspace Detector (MSD), which is built on non-orthogonal, or oblique, projections. The advantage of oblique projections is that they eliminate the leakage of interference signals into the detector, thereby making detectors based on oblique projections invariant to the amount of interference. Furthermore, under Gaussian assumptions for the additive noise, it has been shown that the MSD is Uniformly Most Powerful (higher probability of detection for a fixed probability of false alarm) among all detectors that share this invariance to interference power. In this paper we evaluate the ability of two versions of the MSD to detect targets in HYDICE data collected over sites A and B located at the U.S. Army Yuma Proving Ground. We compute data-derived receiver operating characteristic (ROC) curves and show that the MSD outperforms the MFD.
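A minimal real-valued sketch of the MSD test statistic in one standard GLRT form (Scharf and Friedlander), which for this statistic is equivalent to the oblique-projection description above; S and B are assumed matrices whose columns span the target and interference subspaces, and x is a pixel spectrum.

```python
import numpy as np

def proj(A):
    """Orthogonal projector onto the column space of A."""
    Q, _ = np.linalg.qr(A)
    return Q @ Q.T

def msd_statistic(x, S, B):
    """Matched Subspace Detector GLRT: energy of x explained by the
    target subspace after the interference subspace is projected out,
    normalized by the residual (noise-only) energy. Large values
    favor the target-present hypothesis."""
    P_B = proj(B)
    P_SB = proj(np.hstack([S, B]))
    num = x @ (P_SB - P_B) @ x
    den = x @ (np.eye(len(x)) - P_SB) @ x
    return num / den
```

Because the interference energy appears only inside the projected-out component, the statistic is unchanged when the interference amplitude grows, which is the invariance property cited above.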
Research on the formulation of invariant features for model-based object recognition has mostly been concerned with geometric constructs either of the object or in the imaging process. We describe a new method that identifies invariant features computed from long wave infrared (LWIR) imagery. These features are called thermophysical invariants and depend primarily on the material composition of the object. Features are defined that are functions of only the thermophysical properties of the imaged materials. A physics-based model is derived from the principle of conservation of energy applied at the surface of the imaged regions. A linear form of the model is used to derive features that remain constant despite changes in scene parameters/driving conditions. Simulated and real imagery, as well as ground-truth thermocouple measurements, were used to test the behavior of such features. A method of change detection in outdoor scenes is investigated. The invariants are used to detect when a hypothesized material no longer exists at a given location. For example, one can detect when a patch of clay/gravel has been replaced with concrete at a given site. This formulation yields promising results, but it can produce large values outside a normally small range. Therefore, we adopt a new feature classification algorithm based on the theory of symmetric alpha-stable (SαS) distributions. We show that symmetric alpha-stable distributions model the thermophysical invariant data much better than the Gaussian model and suggest a classifier with superior performance.
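A minimal sketch of the distribution comparison suggested above, fitting a symmetric alpha-stable model (beta held at 0) and a Gaussian model to a sample of invariant-feature values via scipy; comparing total log-likelihoods is an illustrative choice, and numerical stable-law fitting can be slow.

```python
import numpy as np
from scipy.stats import levy_stable, norm

def compare_sas_vs_gaussian(data):
    """Fit a symmetric alpha-stable (SaS) model (beta fixed at 0) and
    a Gaussian model to 1-D feature data; return the fitted SaS
    parameters and the log-likelihood of each model. Heavy-tailed
    invariant data should favor the SaS fit."""
    alpha, beta, loc, scale = levy_stable.fit(data, fbeta=0)
    ll_sas = levy_stable.logpdf(data, alpha, beta, loc, scale).sum()
    mu, sigma = norm.fit(data)
    ll_gauss = norm.logpdf(data, mu, sigma).sum()
    return (alpha, loc, scale), ll_sas, ll_gauss
```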
Research on the formulation of invariant features for model-based object recognition has mostly been concerned with geometric constructs either of the object or in the imaging process. We describe a new method that identifies invariant features computed from long wave infrared imagery. These features are called thermophysical invariants and depend primarily on the material composition of the object. We use this approach to identify objects or changes in scenes viewed by downward-looking infrared imagers. Features are defined that are functions of only the thermophysical properties of the imaged materials. A physics-based model is derived from the principle of conservation of energy applied at the surface of the imaged regions. A linear form of the model is used to derive features that remain constant despite changes in scene parameters/driving conditions. Simulated and real imagery, as well as ground-truth thermocouple measurements, were used to test the behavior of such features. A method of change detection in outdoor scenes is investigated. The invariants are used to detect when a hypothesized material no longer exists at a given location. For example, one can detect when a patch of clay/gravel has been replaced with concrete at a given site.
Recognition of targets in FLIR imagery has been a goal of military weapon systems since the initial development of FLIR sensors. Reliable systems to automatically recognize targets in FLIR imagery have thus far eluded the combined efforts of the DOD services. Historical approaches have concentrated on adapting pattern recognition techniques from visible-imagery (TV) target recognition. Recent research has suggested that consideration of target characteristics unique to IR imaging, such as self-emission due to thermal mass, may lead to improved recognition performance. In order to effectively utilize these characteristics, predictive models are needed to establish the combination of viewing conditions and target states for which the target's thermal characteristics manifest themselves. This paper will focus upon the use of signature prediction models as a component of a recognition algorithm in the context of model-based vision (MBV).
Automatic object recognizer algorithms designed to work with imaging sensor inputs require extensive testing before they should be considered robust enough for challenging applications such as military targeting. Testing automatic target recognition (ATR) algorithms in most cases has been limited to a handful of the scenario conditions of interest as represented by imagery collected with a desired imaging sensor. The question naturally arises as to how robust the performance of the ATR is for all scenario conditions of interest, not just for a small set of collected imagery. A way of addressing algorithm robustness is to characterize the input imagery in terms of some common information content or quality measures that can be correlated with ATR performance. This paper addresses the utility of image characterization measures in terms of estimating ATR detection performance by correlation analyses between nine different image measures and the detection responses of two ATR algorithms. Results show that an image measure called target-to-background entropy difference is the best single measure for estimating ATR detection performance, with correlation coefficients as large as 0.60.
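One plausible implementation of the target-to-background entropy difference measure named above; the 8-bit intensity range and bin count are illustrative assumptions, as is the use of a binary mask to delineate the target region.

```python
import numpy as np

def entropy(pixels, bins=256):
    """Shannon entropy (bits) of a set of grayscale pixel values."""
    hist, _ = np.histogram(pixels, bins=bins, range=(0, 255))
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def target_background_entropy_difference(image, target_mask):
    """Target-to-background entropy difference: entropy of the target
    region minus entropy of the remaining (background) pixels.
    image: 2-D grayscale array; target_mask: boolean array of the
    same shape marking hypothesized target pixels."""
    return entropy(image[target_mask]) - entropy(image[~target_mask])
```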
Image processing to accomplish automatic recognition of military vehicles has promised increased weapon system effectiveness and reduced timelines for a number of Department of Defense missions. Automatic Target Recognizers (ATRs) are often claimed to be able to recognize many different types of vehicles as possible targets in military targeting applications. The targeting scenario conditions include different vehicle poses and histories as well as a variety of imaging geometries, intervening atmospheres, and background environments. Testing these ATR subsystems has in most cases been limited to a handful of the scenario conditions of interest, as represented by imagery collected with the desired imaging sensor. The question naturally arises as to how robust the performance of the ATR is for all scenario conditions of interest, not just for the set of imagery upon which an algorithm was trained.