In this work we consider the problem of developing algorithms for the automatic detection of buried threats in handheld Ground Penetrating Radar (HH-GPR) data. The development of algorithms for HH-GPR is relatively nascent compared to larger downward-looking GPR (DL-GPR) systems. A large number of buried threat detection (BTD) algorithms have been developed for DL-GPR systems. Given the similarities between DL-GPR data and HH-GPR data, effective BTD algorithm designs may be similar for both modalities. In this work we explore the application of a successful class of DL-GPR-based algorithms to HH-GPR data. In particular, we consider the class of algorithms that are based upon gradient-based features, such as the histogram of oriented gradients (HOG) and edge histogram descriptors. We apply a generic gradient-based feature with a support vector machine to a large dataset of HH-GPR data with known buried threat locations. We measure the detection performance of the algorithm as we vary several important design parameters of the feature, and identify the designs that yield the best performance. The results suggest that the design of the gradient histogram (GH) feature has a substantial impact on its performance. We find that a well-tuned GH algorithm yields substantially better performance than poorly-chosen designs, but ultimately performs similarly to a simpler energy-based detector. This suggests that GH-based features may not be beneficial for HH-GPR data, or that further innovation will be needed to achieve benefits.
KEYWORDS: Algorithm development, General packet radio service, Detection and tracking algorithms, Sensors, Ground penetrating radar, Antennas, Feature extraction, Data modeling, Radar, Systems modeling
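The kind of gradient histogram (GH) feature discussed above can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's exact design: the bin count, unsigned orientations, and L2 normalization are assumptions.

```python
import numpy as np

def gradient_histogram(patch, n_bins=9):
    """Compute a simple gradient-orientation histogram for a 2D patch.

    Gradients are taken along the two image axes; orientations are
    quantized into n_bins over [0, pi), and each pixel votes with its
    gradient magnitude. The histogram is L2-normalized.
    """
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    # Unsigned orientation in [0, pi)
    ori = np.mod(np.arctan2(gy, gx), np.pi)
    bins = np.minimum((ori / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

In a full pipeline, such histograms would be computed over many patches of the GPR imagery and passed to a classifier such as a support vector machine.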
In this work we consider the problem of developing algorithms for the automatic detection of buried threats using handheld Ground Penetrating Radar (HH-GPR) data. The development of algorithms for HH-GPR is relatively nascent compared to algorithm development efforts for larger downward-looking GPR (DL-GPR) systems. One of the biggest bottlenecks in algorithm development is the relative scarcity of labeled HH-GPR data that can be used for development. Given the similarities between DL-GPR data and HH-GPR data, however, we hypothesized that it may be possible to utilize DL-GPR data to support the development of algorithms for HH-GPR. In this work we assess the detection performance of an HH-GPR-based buried threat detection (BTD) algorithm as we vary the amounts and characteristics of the DL-GPR data included in the development of HH-GPR detection algorithms. The results indicate that supplementing HH-GPR data with DL-GPR data does improve performance, especially when including data collected over buried threat locations.
KEYWORDS: General packet radio service, Convolutional neural networks, Detection and tracking algorithms, Neurons, Control systems, Performance modeling, Data processing, Algorithm development, Data modeling, Network architectures
The ground penetrating radar (GPR) is a remote sensing technology that has been successfully used for detecting buried explosive threats. A large body of published research has focused on developing algorithms that automatically detect buried threats using data from GPR sensors. One promising class of algorithms for this purpose is convolutional neural networks (CNNs); however, CNNs suffer from overfitting due to the limited and variable nature of GPR data. One solution to this problem is to use a validation dataset during training, but this excludes valuable labeled data from training. In this work we show that two modern techniques for training CNNs, Batch Normalization and the Adam optimizer, substantially improve CNN performance and reduce overfitting when applied jointly. We also investigate and identify useful settings for several important CNN hyperparameters: L2 regularization, Dropout, and the learning rate schedule. We find that the improved CNN (a baseline CNN, plus all of our improvements) substantially outperforms two competing conventional detection algorithms.
KEYWORDS: General packet radio service, Machine learning, Data analysis, Algorithm development, Detection and tracking algorithms, Data modeling, Ground penetrating radar, Visualization, Antennas, Feature extraction
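The two training techniques named above can be illustrated in simplified form: a Batch Normalization forward pass and a single Adam update step. This is a minimal NumPy sketch under simplified assumptions; real CNN frameworks implement both with additional state such as running statistics.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Batch-normalize activations x of shape (batch, features).

    Each feature is standardized using the batch mean and variance,
    then scaled and shifted by the learnable parameters gamma, beta.
    """
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: bias-corrected moving averages of the gradient
    (m) and its square (v) scale the step size per parameter."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)
    v_hat = v / (1 - b2**t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```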
This work focuses on the development of automatic buried threat detection (BTD) algorithms using ground penetrating radar (GPR) data. Buried threats tend to exhibit unique characteristics in GPR imagery, such as high-energy hyperbolic shapes, which can be leveraged for detection. Many recent BTD algorithms are supervised, and therefore require training with exemplars of GPR data collected over non-threat locations and threat locations, respectively. Frequently, data from non-threat GPR examples exhibit high-energy hyperbolic patterns, similar to those observed from a buried threat. Is it still useful, therefore, to include such examples during training and encourage an algorithm to label such data as non-threat? Similarly, some true buried threat examples exhibit few distinctive threat-like patterns. We investigate whether it is beneficial to treat such GPR data examples as mislabeled, and either (i) relabel them, or (ii) remove them from training. We study this problem using two algorithms that automatically identify mislabeled examples, if they are present, and examine the impact of removing or relabeling such examples during training. We conduct these experiments on a large collection of GPR data with several state-of-the-art GPR-based BTD algorithms.
In this work, we consider the development of algorithms for automated buried threat detection (BTD) using Ground Penetrating Radar (GPR) data. When viewed in GPR imagery, buried threats often exhibit hyperbolic shapes, and this characteristic shape can be leveraged for buried threat detection. Consequently, many modern detectors begin processing the received data by extracting visual descriptors of the GPR data (i.e., features). Ideally, these descriptors succinctly encode all decision-relevant information, such as shape, while suppressing spurious data content (e.g., random noise). Some notable examples of successful descriptors include the histogram of oriented gradients (HOG) and the edge histogram descriptor (EHD). A key difference between many descriptors is the precision with which shape information is encoded. For example, HOG encodes shape variations over both space and time (high precision), while EHD primarily encodes shape variations only over space (lower precision). In this work, we conduct experiments on a large GPR dataset which suggest that EHD-like descriptors outperform HOG-like descriptors, and exhibit several other practical advantages as well. These results suggest that higher-resolution shape information (particularly shape variation over time) is not beneficial for buried threat detection. Subsequent analysis also indicates that the performance advantage of EHD is most pronounced among difficult buried threats, which also exhibit more irregular shape patterns.
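The precision difference described above can be illustrated schematically: a HOG-like descriptor histograms orientations per (time, space) cell, while an EHD-like descriptor pools over time and keeps only spatial bands. This is a hypothetical NumPy sketch; the cell sizes and bin counts are illustrative, not those of the published descriptors.

```python
import numpy as np

def orientation_map(patch, n_bins=5):
    """Quantize per-pixel gradient orientation into n_bins over [0, pi)."""
    gy, gx = np.gradient(patch.astype(float))
    ori = np.mod(np.arctan2(gy, gx), np.pi)
    return np.minimum((ori / np.pi * n_bins).astype(int), n_bins - 1)

def hog_like(patch, cell=4, n_bins=5):
    """High precision: one orientation histogram per (time, space) cell."""
    b = orientation_map(patch, n_bins)
    t, s = patch.shape
    feats = []
    for i in range(0, t, cell):
        for j in range(0, s, cell):
            feats.append(np.bincount(b[i:i + cell, j:j + cell].ravel(),
                                     minlength=n_bins))
    return np.concatenate(feats)

def ehd_like(patch, cell=4, n_bins=5):
    """Lower precision: pool over time, one histogram per spatial band."""
    b = orientation_map(patch, n_bins)
    s = patch.shape[1]
    feats = [np.bincount(b[:, j:j + cell].ravel(), minlength=n_bins)
             for j in range(0, s, cell)]
    return np.concatenate(feats)
```

For the same patch, the HOG-like feature is longer (more cells, more precision), while the EHD-like feature collapses the time axis.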
A large number of algorithms have been proposed for automatic buried threat detection (BTD) in ground penetrating radar (GPR) data. Convolutional neural networks (CNNs) have recently achieved groundbreaking results on many recognition tasks. This success is due, in part, to their ability to automatically infer effective data representations (i.e., features) using training data. This capability, however, results in a high-capacity model (i.e., one with many free parameters) that is more difficult to train, and more prone to overfitting, than models employing hand-crafted feature designs. This drawback is pronounced when training data is relatively scarce, as is the case with GPR BTD. In this work we propose to combine the relative advantages of hand-crafted features and CNNs by constructing CNN architectures that closely emulate successful hand-crafted feature designs for GPR BTD. This makes it possible to apply supervised training to traditional hand-crafted features, allowing them to adapt to the unique characteristics of the GPR BTD problem. Simultaneously, this approach yields a much lower-capacity CNN model that incorporates substantial prior research knowledge, making the model much easier to train. We demonstrate the feasibility and effectiveness of this approach by designing a “neural” implementation of the popular histogram of oriented gradients (HOG) feature. The resulting neural HOG (NHOG) implementation is much smaller and easier to train than standard CNN architectures, and achieves superior detection performance compared to the un-trained HOG feature. In theory, neural implementations can be developed for many existing successful GPR BTD algorithms, potentially yielding similar benefits.
KEYWORDS: Detection and tracking algorithms, Data modeling, Ground penetrating radar, Algorithm development, Data processing, Feature extraction, Visual process modeling, Unexploded object detection, Threat warning systems
A great deal of research has been focused on the development of computer algorithms for buried threat detection (BTD) in ground penetrating radar (GPR) data. Most recently proposed BTD algorithms are supervised, and therefore employ machine learning models that infer their parameters using training data. Cross-validation (CV) is a popular method for evaluating the performance of such algorithms, in which the available data is systematically split into N disjoint subsets, and an algorithm is repeatedly trained on N−1 subsets and tested on the excluded subset. There are several common types of CV in BTD, which vary principally in the spatial criterion used to partition the data: site-based, lane-based, region-based, etc. The performance metrics obtained via CV are often used to suggest the superiority of one model over others; however, most studies utilize just one type of CV, and the impact of this choice is unclear. Here we employ several types of CV to evaluate algorithms from a recent large-scale BTD study. The results indicate that the rank-order of the performance of the algorithms varies substantially depending upon which type of CV is used. For example, the rank-1 algorithm for region-based CV is the lowest-ranked algorithm for site-based CV. This suggests that any algorithm results should be interpreted carefully with respect to the type of CV employed. We discuss some potential interpretations of performance, given a particular type of CV.
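The spatial partitioning at the heart of these CV variants can be sketched as follows. This is an illustrative pure-Python sketch; the `site`/`lane` keys are hypothetical field names, not those of any particular dataset.

```python
def grouped_cv_folds(examples, key):
    """Partition examples into disjoint folds by a spatial key.

    `examples` is a list of dicts; `key` selects the partitioning
    criterion (e.g., 'site' or 'lane'). All examples sharing a key
    value fall into the same fold, so no spatial group is ever split
    across the training and test sets.
    """
    groups = {}
    for ex in examples:
        groups.setdefault(ex[key], []).append(ex)
    folds = list(groups.values())
    # Each fold in turn is the held-out test set; the rest are training.
    splits = []
    for i, test in enumerate(folds):
        train = [ex for j, f in enumerate(folds) if j != i for ex in f]
        splits.append((train, test))
    return splits
```

Calling this with `key='site'` versus `key='lane'` yields the different CV types compared in the study.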
The ground penetrating radar (GPR) is a popular remote sensing modality for buried threat detection. In this work we focus on the development of supervised machine learning algorithms that automatically identify buried threats in GPR data. An important step in many of these algorithms is feature extraction, where statistics or other measures are computed from the raw GPR data and then provided to the machine learning algorithms for classification. It is well known that an effective feature can lead to major performance improvements and, as a result, a variety of features have been proposed in the literature. Most of these features have been handcrafted, or designed through trial-and-error experimentation. Dictionary learning is a class of algorithms that attempt to automatically learn effective features directly from the data (e.g., raw GPR data), with little or no supervision. Dictionary learning methods have yielded state-of-the-art performance on many problems, including image recognition, and in this work we adapt them to GPR data in order to learn effective features for buried threat classification. We employ the LC-KSVD algorithm, which is a discriminative dictionary learning approach, as opposed to a purely reconstructive one like the popular K-SVD algorithm. We use a large collection of GPR data to show that LC-KSVD outperforms two other approaches: the popular histogram of oriented gradients (HOG) with a linear classifier, and HOG with a nonlinear classifier (the Random Forest).
The Ground Penetrating Radar (GPR) is a remote sensing modality that has been used to collect data for the task of buried threat detection. The returns of the GPR can be organized as images in which the characteristic visual patterns of threats can be leveraged for detection using visual descriptors. Recently, convolutional neural networks (CNNs) have been applied to this problem, inspired by their state-of-the-art performance on object recognition tasks in natural images. One well-known limitation of CNNs is that they require large amounts of data for training (i.e., parameter inference) to avoid overfitting (i.e., poor generalization). This presents a major challenge for target detection in GPR because of the (relatively) few labeled examples of target and non-target GPR data. In this work we use a popular transfer learning approach for CNNs to address this problem. In this approach we train two CNNs on other, much larger, datasets of grayscale imagery from different problems. Specifically, we pre-train our CNNs on (i) the popular CIFAR-10 dataset, and (ii) a dataset of high-resolution aerial imagery for detecting solar photovoltaic arrays. We then use varying subsets of the parameters from these two pre-trained CNNs to initialize the training of our buried threat detection networks for GPR data. We conduct experiments on a large collection of GPR data and demonstrate that these approaches improve the performance of CNNs for buried target detection in GPR data.
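The transfer step described above, initializing a target network from a subset of pre-trained parameters, can be sketched as follows. This is a simplified illustration that treats models as dictionaries of NumPy arrays; real deep learning frameworks use their own state-dict mechanisms, and the layer names here are hypothetical.

```python
import numpy as np

def init_from_pretrained(target, pretrained, layers_to_copy):
    """Initialize a target parameter dict from a pretrained one.

    Only the named layers that exist in both models with matching
    shapes are copied; everything else keeps its fresh (random)
    initialization. Returns the new parameters and the list of
    layers actually copied.
    """
    out = dict(target)
    copied = []
    for name in layers_to_copy:
        if (name in pretrained and name in target
                and pretrained[name].shape == target[name].shape):
            out[name] = pretrained[name].copy()
            copied.append(name)
    return out, copied
```

Varying `layers_to_copy` (e.g., only early convolutional layers versus all layers) corresponds to the "varying subsets of the parameters" explored in the abstract.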
In recent years, the Ground Penetrating Radar (GPR) has successfully been applied to the problem of buried threat detection (BTD). A large body of research has focused on using computerized algorithms to automatically discriminate between buried threats and subsurface clutter in GPR data. For this purpose, the GPR data is frequently treated as an image of the subsurface, within which the reflections associated with targets often appear with a characteristic shape. In recent years, shape descriptors from the natural image processing literature have been applied to buried threat detection, and the histogram of oriented gradients (HOG) feature has achieved state-of-the-art performance. HOG consists of computing histograms of the image gradients in disjoint square regions, which we call pooling regions, across the GPR images. In this work we create a large body of potential pooling regions and use the group LASSO (GLASSO) to choose a subset of the pooling regions that are most appropriate for BTD on GPR data. We examined this approach on a large collection of GPR data using lane-based cross-validation, and the results indicate that GLASSO can select a subset of pooling regions that leads to superior performance relative to the original HOG feature, while also reducing the total number of features needed. The selected pooling regions also provide insight into the regions of GPR images that are most important for discriminating threat and non-threat data.
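The GLASSO selection mechanism can be illustrated by its proximal operator (group soft-thresholding), which shrinks each group of weights by its norm and zeroes out entire groups, here standing in for pooling regions. This is a minimal NumPy sketch of that one step, not the paper's full optimization.

```python
import numpy as np

def group_soft_threshold(w, groups, lam):
    """Proximal step for the group-LASSO penalty.

    Each group of weights (e.g., the weights belonging to one pooling
    region) is shrunk toward zero; groups whose norm falls below lam
    are zeroed out entirely, deselecting that region.
    """
    out = np.zeros_like(w, dtype=float)
    for g in groups:
        norm = np.linalg.norm(w[g])
        if norm > lam:
            out[g] = (1 - lam / norm) * w[g]
    return out
```

Groups surviving the threshold correspond to pooling regions retained in the final feature.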
Ground penetrating radar (GPR) systems have emerged as a state-of-the-art remote sensing platform for the automatic detection of buried explosive threats. The GPR system that was used to collect the data considered in this work consists of an array of radar antennas mounted on the front of a vehicle. The GPR data is collected as the vehicle moves forward down a road, lane, or path. The data is then processed by computerized algorithms that are designed to automatically detect the presence of buried threats. The amount of GPR data collected is typically prohibitive for real-time buried threat detection, and therefore it is common practice to first apply a prescreening algorithm in order to identify a small subset of data that will then be processed by more computationally advanced algorithms. Historically, the F1V4 anomaly detector, which is energy-based, has been used as the prescreener for the GPR system considered in this work. Because F1V4 is energy-based, it largely discards shape information; however, shape information has been established as an important cue for the presence of a buried threat. One recently developed prescreener, termed the HOG prescreener, employs a Histogram of Oriented Gradients (HOG) descriptor to leverage both energy and shape information for prescreening. To date, the HOG prescreener has yielded inferior performance compared to F1V4, even though it leverages additional shape information. In this work we propose several modifications to the original HOG prescreener and use a large collection of GPR data to demonstrate its superior detection performance compared to both the original HOG prescreener and the F1V4 prescreener.
KEYWORDS: Target detection, Ground penetrating radar, Algorithm development, Remote sensing, Detection and tracking algorithms, General packet radio service, Sensors, Data processing, Detector development, Signal attenuation
Ground penetrating radar (GPR) is a popular remote sensing modality for buried threat detection, and many algorithms have been developed to detect buried threats using GPR data. One ongoing challenge with GPR is the detection of very deeply buried targets. In this work a detection approach is proposed that improves the detection of very deeply buried targets and, interestingly, shallow targets as well. First, it is shown that the signal of a target (the target “signature”) is well localized in time and well correlated with the target’s burial depth. This motivates the proposed approach, in which the GPR data is split into two disjoint subsets: an early and a late portion, corresponding to the times at which shallow and deep target signatures appear, respectively. Experiments are conducted on real GPR data using the previously published histogram of oriented gradients (HOG) prescreener: a fast supervised processing method operating on HOG features. The results show substantial improvements in the detection of very deeply buried targets (4.1% to 17.2%) and in overall detection performance (81.1% to 83.9%). Further, it is shown that the performance of the proposed approach is relatively insensitive to the time at which the data is split. These results suggest that other detection methods may benefit from depth-based processing as well.
KEYWORDS: Land mines, General packet radio service, Data modeling, Detection and tracking algorithms, Target detection, Data analysis, Process modeling, Ground penetrating radar, Remote sensing, Sensors
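The early/late split at the heart of this approach amounts to gating the B-scan at a chosen time sample, so that each portion can be processed by its own detector. A minimal NumPy sketch, assuming the B-scan is stored as (time samples × scan positions); the split index would be chosen as in the paper's experiments.

```python
import numpy as np

def split_by_time(bscan, t_split):
    """Split a B-scan into disjoint early and late time portions.

    Shallow-target signatures fall in the early gate and deep-target
    signatures in the late gate; the two gates partition the data
    exactly (no overlap, nothing discarded).
    """
    early = bscan[:t_split, :]
    late = bscan[t_split:, :]
    return early, late
```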
Buried threat detection algorithms for Ground Penetrating Radar (GPR) measurements often utilize a statistical classifier to model target responses. There are many different target types with distinct responses, and all are buried in a wide range of conditions that distort the target signature. Robust performance requires the classifier to learn the distinct responses of target types while accounting for the variability due to the physics of the emplacement. In this work, a method to reduce certain sources of excess variation is presented that enables a linear classifier to learn distinct templates for each target type’s response despite the operational variability. The different target subpopulations are represented by a Gaussian Mixture Model (GMM). Training the GMM requires jointly extracting the patches around target responses and learning the statistical parameters, as neither is known a priori. The GMM parameters and the choice of patches are determined by variational Bayesian methods. The proposed method allows patches that contain only the target response to be extracted from a larger data block. The patches extracted by this method improve the ROC curve for distinguishing targets from background clutter, compared to patches extracted by other methods that aim to reduce the operational variability.
KEYWORDS: Land mines, General packet radio service, Principal component analysis, Data modeling, Target detection, Sensors, Expectation maximization algorithms, Detection and tracking algorithms, Distortion, Dielectrics
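The GMM at the core of this method assigns each candidate patch a posterior responsibility for each target subpopulation. The E-step of that computation can be sketched as follows; this is a minimal NumPy sketch with diagonal covariances, whereas the paper's variational Bayesian treatment is more involved.

```python
import numpy as np

def gmm_responsibilities(x, means, covs, weights):
    """E-step of a GMM with diagonal covariances.

    x: (n, d) samples; means: (k, d); covs: (k, d) diagonal variances;
    weights: (k,) mixing proportions. Returns an (n, k) matrix of
    posterior responsibilities, computed in log space for stability.
    """
    n, _ = x.shape
    k = means.shape[0]
    log_p = np.empty((n, k))
    for j in range(k):
        diff = x - means[j]
        log_p[:, j] = (np.log(weights[j])
                       - 0.5 * np.sum(np.log(2 * np.pi * covs[j]))
                       - 0.5 * np.sum(diff**2 / covs[j], axis=1))
    # Normalize per sample (softmax over components)
    log_p -= log_p.max(axis=1, keepdims=True)
    p = np.exp(log_p)
    return p / p.sum(axis=1, keepdims=True)
```

In the full method, these responsibilities would be interleaved with parameter updates and patch re-extraction until convergence.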
Ground Penetrating Radar (GPR) is a very promising technology for subsurface threat detection. A successful algorithm employing GPR should achieve high detection rates at a low false-alarm rate and do so at operationally relevant speeds. GPRs measure reflections at dielectric boundaries that occur at the interfaces between different materials. These boundaries may occur at any depth, within the sensor's range, and furthermore, the dielectric changes could be such that they induce a 180 degree phase shift in the received signal relative to the emitted GPR pulse. As a result of these time-of-arrival and phase variations, extracting robust features from target responses in GPR is not straightforward. In this work, a method to mitigate polarity and alignment variations based on an expectation-maximization (EM) principal-component analysis (PCA) approach is proposed. This work demonstrates how model-based target alignment can significantly improve detection performance. Performance is measured according to the improvement in the receiver operating characteristic (ROC) curve for classification before and after the data is properly aligned and phase-corrected.
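The time-of-arrival and polarity corrections described above can be illustrated with a simple cross-correlation scheme: shift and sign are chosen to maximize agreement with a template. This is an illustrative NumPy sketch only; the paper's EM-PCA approach is model-based and considerably more sophisticated.

```python
import numpy as np

def align_to_template(signal, template):
    """Align a 1-D A-scan to a template, correcting both the time
    shift and a possible 180-degree phase (polarity) flip.

    The shift and sign that maximize the absolute cross-correlation
    with the template are applied to the signal. np.roll is circular,
    which suffices for this sketch.
    """
    corr = np.correlate(signal, template, mode='full')
    idx = int(np.argmax(np.abs(corr)))
    sign = 1.0 if corr[idx] >= 0 else -1.0
    shift = idx - (len(template) - 1)
    return sign * np.roll(signal, -shift), shift, sign
```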