Tracking of moving vehicles has been demonstrated in simulation using hyperspectral and polarimetric imagery (HSI/PI).
Synthetic HSI/PI image cubes of an urban scene containing moving vehicle content were generated using the Rochester
Institute of Technology's Digital Imaging and Remote Sensing Image Generation (DIRSIG) Megascene #1 model.
Video streams of sensor-reaching radiance frames collected from a virtual orbiting aerial platform's imaging sensor were
used to test adaptive sensor designs in a target tracking application. A hybrid division-of-focal-plane imaging sensor
comprising an array of 2×2 superpixels containing both micromirrors and micropolarizers was designed for co-registered
HSI/PI aerial remote sensing. Pixel-sized aluminum wire-grid linear polarizers were designed and simulated to measure
transmittance, extinction ratio, and diattenuation responses in the presence of an electric field. Wire-grid spacings of 500 nm and 80 nm were designed for lithographic deposition and etching processes. Both micromirror-relayed
panchromatic imagery and micropolarizer-collected PI were orthorectified and then processed by Numerica
Corporation's feature-aided target tracker to perform multimodal adaptive performance-driven sensing of moving
vehicle targets. Hyperspectral responses of selected target pixels were measured using micromirror-commanded slits to
bolster track performance. Unified end-to-end track performance case studies were completed using both panchromatic
and degree of linear polarization sensor modes.
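As a concrete (and simplified) illustration of the degree-of-linear-polarization sensor mode, the Python sketch below computes per-pixel Stokes parameters and DoLP from the four analyzer orientations of a 2×2 micropolarizer superpixel; the 0°/45°/90°/135° layout and the function names are assumptions for illustration, not the flight design.

```python
import numpy as np

def dolp_from_superpixel(i0, i45, i90, i135):
    """Degree of linear polarization from a 2x2 micropolarizer superpixel
    with analyzers at 0/45/90/135 degrees (assumed layout).
    Stokes parameters: S0 = total intensity, S1 = I0 - I90, S2 = I45 - I135."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)

# Per-pixel DoLP over whole frames (one intensity image per analyzer angle)
frames = [np.random.rand(64, 64) for _ in range(4)]  # stand-in radiance frames
dolp = dolp_from_superpixel(*frames)
```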
KEYWORDS: Sensors, Motion models, Kinematics, Target detection, Data modeling, Remote sensing, Micromirrors, Spectroscopy, Monte Carlo methods, Imaging systems
An architecture and implementation are presented for persistent, hyperspectral, adaptive, multimodal, feature-aided tracking in the urban context. A novel remote-sensing imager has been designed that employs a micromirror array at the focal plane for per-pixel adaptation. A suite of end-to-end synthetic experiments has been conducted, which includes high-fidelity moving-target urban vignettes, DIRSIG hyperspectral rendering, and full image-chain treatment of the prototype adaptive sensor. Corresponding algorithm development has focused on motion segmentation, spectral feature modeling, classification, fused kinematic/spectral association, and adaptive sensor feedback/control.
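As an illustration of what fused kinematic/spectral association can look like, the sketch below combines a squared Mahalanobis residual distance with a spectral-angle feature term; the cost form and the weight w are assumptions for illustration, not the algorithm reported here.

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra; small angles
    indicate similar material signatures."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def fused_cost(z, z_pred, S, spec_det, spec_track, w=0.5):
    """Association cost fusing kinematic and spectral evidence:
    squared Mahalanobis distance of the measurement residual plus a
    weighted spectral-angle term (w is an assumed tuning parameter)."""
    r = z - z_pred
    d_kin = float(r @ np.linalg.solve(S, r))
    return d_kin + w * spectral_angle(spec_det, spec_track)
```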
A novel multi-object spectrometer (MOS) is being explored for use as an adaptive performance-driven sensor that tracks
moving targets. Developed originally for astronomical applications, the instrument utilizes an array of micromirrors to
reflect light to a panchromatic imaging array. When an object of interest is detected, the individual micromirrors imaging the object are tilted to reflect the light to a spectrometer, which collects a full spectrum. This paper presents example sensor performance from empirical data collected in laboratory experiments, as well as our approach to designing optical and radiometric models of the MOS channels and the micromirror array. Simulation of moving vehicles in a high-fidelity hyperspectral scene is used to generate a dynamic video input for the adaptive sensor. Performance-driven
algorithms for feature-aided target tracking and modality selection exploit multiple electromagnetic observables to track
moving vehicle targets.
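The detect-then-tilt behavior described above can be pictured with a minimal sketch; the mask-based command interface below is hypothetical, standing in for the instrument's actual micromirror control path.

```python
import numpy as np

def command_micromirrors(detection_mask):
    """Given a boolean mask of pixels covering a detected object, return a
    per-mirror command array: mirrors over the object tilt toward the
    spectrometer channel; all others keep relaying light to the imager."""
    IMAGING, SPECTROMETER = 0, 1
    commands = np.full(detection_mask.shape, IMAGING, dtype=np.int8)
    commands[detection_mask] = SPECTROMETER
    return commands

mask = np.zeros((8, 8), dtype=bool)
mask[3:5, 4:6] = True              # pixels imaging an object of interest
cmds = command_micromirrors(mask)  # those mirrors now feed the spectrometer
```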
Target tracking in an urban environment presents a wealth of ambiguous tracking scenarios that cause a kinematic-only tracker to fail. Partial or full occlusions in areas of tall buildings are particularly problematic, as there is often no way to correctly identify the target with only kinematic information. Feature-aided tracking attempts to resolve the problems of a kinematic-only tracker by extracting features from the data. In the case of panchromatic video, the features are often intensity histograms; the same is true for color video data. In cases where targets have distinctly different colors, more typical feature-aided trackers may perform well. However, a typical urban setting contains targets of similar size, shape, and color, and more typical feature-aided trackers have little hope of resolving many of the ambiguities we face. We present a novel feature-aided tracking algorithm combining two sensor modes: panchromatic video data and hyperspectral imagery. The hyperspectral data is used to provide a unique fingerprint for each target of interest, where that fingerprint is the set of features used in our feature-aided tracker. Results indicate a 19% gain in correct track ID with our hyperspectral feature-aided tracker compared to the baseline performance of a kinematic-only tracker.
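A minimal sketch of the fingerprint idea follows, assuming the fingerprint is simply the mean spectrum of a target's pixels and that matching minimizes Euclidean distance; the paper's actual feature set and matcher are not specified here.

```python
import numpy as np

def fingerprint(target_pixels):
    """Hyperspectral fingerprint for one target: here simply the mean
    spectrum over its pixels (target_pixels: shape (num_pixels, num_bands))."""
    return target_pixels.mean(axis=0)

def identify(detection_spectrum, fingerprints):
    """Return the track ID whose stored fingerprint is closest to the
    spectrum of a new detection (Euclidean distance as a stand-in metric)."""
    return min(fingerprints,
               key=lambda tid: np.linalg.norm(fingerprints[tid] - detection_spectrum))

# fingerprints = {track_id: fingerprint(pixels), ...}; a call to
# identify(new_spectrum, fingerprints) then resolves an ambiguous association.
```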
Tracking performance is a function of data quality, tracker type, and target maneuverability. Many contemporary tracking methods are useful for various operating conditions. To determine nonlinear tracking performance independent of the scenario, we wish to explore metrics that highlight tracker capability. Using the emerging relative track metrics, as opposed to root-mean-square error (RMS) calculations, we explore the Averaged Normalized Estimation Error Squared (ANESS) and the Non-Credibility Index (NCI) to determine tracker quality independent of the data. This paper demonstrates the usefulness of relative metrics in detecting a model mismatch, or more specifically a bias in the model, using the probabilistic data association filter, the unscented Kalman filter, and the particle filter.
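For orientation, the sketch below computes the per-step normalized estimation error squared (NEES) and simplified ANESS/NCI scores. Note this is a simplification: the full Non-Credibility Index is referenced against an ideal estimator's NEES, for which the state dimension n is used here as a stand-in.

```python
import numpy as np

def nees(x_true, x_est, P):
    """Normalized estimation error squared for one time step:
    e' P^{-1} e, where e is the state estimation error and P the
    filter's reported error covariance."""
    e = np.asarray(x_true) - np.asarray(x_est)
    return float(e @ np.linalg.solve(P, e))

def aness(nees_samples, n):
    """Averaged NEES over Monte Carlo runs/time steps, normalized by the
    state dimension n; a credible (consistent) filter scores near 1."""
    return float(np.mean(nees_samples)) / n

def nci(nees_samples, n):
    """Simplified Non-Credibility Index: mean |10*log10(eps_k / n)| in dB.
    0 dB indicates a credible filter; large values flag a filter that is
    over- or under-confident, e.g. due to a model bias."""
    eps = np.asarray(nees_samples, dtype=float)
    return float(np.mean(np.abs(10.0 * np.log10(eps / n))))
```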
A major challenge for ATR evaluation is developing an accurate image truth that can be compared to an ATR algorithm's decisions to assess performance. We have developed a semi-automated video truthing application, called START, that greatly improves the productivity of an operator truthing video sequences. The user, after previewing the video, selects a set of salient frames (called "keyframes"), each corresponding to a significant event in the video. These keyframes are then manually truthed. We provide a spectrum of truthing tools that generate truth for additional frames from the keyframes. These tools include fully automatic feature tracking, interpolation, and completely manual methods. The application uses a set of diagnostic measures to manage the user's attention, flagging portions of the video for which the computed truth needs review. This changes the role of the operator from raw data entry to that of an expert appraiser supervising the quality of the image truth. We have implemented a number of graphical displays summarizing the video truthing at various timescales. Additionally, the track information can be viewed, showing only the lifespan of the entities involved. A combination of these displays allows users to manage their resources more effectively. Two studies have shown the utility of START: one focusing on the accuracy of the automated truthing process, and the other on usability of the application by a set of expert users.
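The interpolation tool mentioned above can be illustrated with a short sketch; the Box structure and the constant image-plane-velocity assumption below are illustrative, not START's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box for one truthed entity in one frame."""
    x: float
    y: float
    w: float
    h: float

def interpolate_truth(key_a, key_b, frame_a, frame_b, frame):
    """Linearly interpolate a truth box between two manually truthed
    keyframes, assuming roughly constant image-plane velocity."""
    t = (frame - frame_a) / (frame_b - frame_a)
    lerp = lambda a, b: a + t * (b - a)
    return Box(lerp(key_a.x, key_b.x), lerp(key_a.y, key_b.y),
               lerp(key_a.w, key_b.w), lerp(key_a.h, key_b.h))

# e.g. truth for frame 12 from keyframes truthed at frames 10 and 20
box = interpolate_truth(Box(100, 50, 40, 20), Box(140, 60, 40, 20), 10, 20, 12)
```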
Having relevant sensor data available during the early phases of ATR algorithm development and evaluation projects is paramount. This data primarily comes either from synthetic generation or from measured collections. These collections, in turn, can be either highly controlled or operational-like exercises. This paper presents a broad overview of the types of data housed within the Automatic Target Recognition Division of the Air Force Research Laboratory (AFRL/SNA) that are available to the ATR developer.