The VideoPlus®-Aware (VPA) system enables autonomous video-based target detection, tracking, and classification. The system stabilizes video and operates completely autonomously. A statistical background model enables robust acquisition of moving targets, while stopped targets are tracked using feature-based detectors. An ensemble classifier is trained for automated detection and classification of dismounts (i.e., humans), and a planar scene model is used both to improve system performance and to reduce false positives. A formal evaluation of the VPA system was performed by the government to quantify the system's ability to detect, track, and classify humans. The evaluation provided 811 separate data points gathered over a period of four days, with an overall probability of sensing of 99.9%. The probability of detection was 86.2% and the percentage of correct action classification was 82%. The data yielded a false alarm rate of 0 per hour and a nuisance alarm rate of 0.72 per hour. Dismounts were reliably classified with pixel heights as low as 25 pixels. Real-time automated detection, tracking, and classification of targets with low false-positive rates was achieved, even with few pixels on target. Optimizations based on the planar scene model were sufficient to dramatically reduce the runtime of sliding-window classifiers.
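The abstract does not give the form of the statistical background model. As an illustration only, the sketch below uses a common choice, a per-pixel running Gaussian over the stabilized video, to threshold each frame into candidate moving-target pixels; the class name and the alpha and k parameters are assumptions, not the VPA implementation.

import numpy as np

class RunningGaussianBackground:
    """Per-pixel running Gaussian background model (illustrative sketch only).

    Pixels that deviate from the background mean by more than k standard
    deviations are flagged as foreground, i.e. candidate moving targets.
    """

    def __init__(self, first_frame, alpha=0.02, k=3.0):
        self.mean = first_frame.astype(np.float32)
        self.var = np.full_like(self.mean, 25.0)  # assumed initial variance
        self.alpha = alpha                        # learning rate (assumed)
        self.k = k                                # detection threshold in sigmas

    def apply(self, frame):
        frame = frame.astype(np.float32)
        diff = frame - self.mean
        foreground = diff ** 2 > (self.k ** 2) * self.var
        # Update only background pixels so stopped targets are not absorbed
        # into the model too quickly (the system above hands those off to
        # feature-based detectors).
        bg = ~foreground
        self.mean[bg] += self.alpha * diff[bg]
        self.var[bg] += self.alpha * (diff[bg] ** 2 - self.var[bg])
        return foreground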
Small fixed-wing UAS (SUAS) such as Raven and Unicorn have limited power, speed, and maneuverability. Their missions can be dramatically hindered by environmental conditions (wind, terrain), obstructions (buildings, trees) blocking clear line of sight to a target, and/or sensor hardware limitations (fixed stare, limited gimbal motion, lack of zoom). Toyon's Sensor Guided Flight (SGF) algorithm was designed to account for SUAS hardware shortcomings and enable long-term tracking of maneuvering targets by maintaining persistent eyes-on-target. SGF was successfully tested in simulation with high-fidelity UAS, sensor, and environment models, but real-world flight testing with 60 Unicorn UAS revealed surprising second-order challenges that were not highlighted by the simulations. This paper describes the SGF algorithm, our first-round simulation results, our second-order discoveries from flight testing, and subsequent improvements that were made to the algorithm.
Substantial research has addressed the problems of automatic search, routing, and sensor tasking for UAVs,
producing many good algorithms for each task. But UAV surveillance missions typically include combinations of
these tasks, so an algorithm that can manage and control UAVs through multiple tasks is desired. The algorithm
in this paper employs a cooperative graph-based search when target states are unknown. As target states become
more localized, the algorithm switches to routing UAV(s) for target intercept. Once a UAV is close to a target,
waypoints and sensor commands are optimized over short horizons to maintain the best sensor-to-target viewing
geometry.
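The abstract does not state how the transitions between graph-based search, intercept routing, and close-in sensor optimization are triggered. A minimal sketch of one plausible switching rule, with made-up thresholds, is given below.

from enum import Enum, auto

class Mode(Enum):
    SEARCH = auto()     # cooperative graph-based search
    INTERCEPT = auto()  # route the UAV toward a localized target estimate
    TRACK = auto()      # short-horizon waypoint and sensor optimization

def select_mode(target_std_m, range_to_target_m,
                localize_thresh_m=500.0, track_range_m=1000.0):
    """Choose the control mode from target-state uncertainty and range.

    The thresholds are placeholders; the paper does not give the actual
    switching criteria.
    """
    if target_std_m is None or target_std_m > localize_thresh_m:
        return Mode.SEARCH
    if range_to_target_m > track_range_m:
        return Mode.INTERCEPT
    return Mode.TRACK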
Research from the Institute for Collaborative Biotechnologies (ICB) at the University of California at Santa Barbara
(UCSB) has identified swarming algorithms used by flocks of birds and schools of fish that enable these animals to move
in tight formation and cooperatively track prey with minimal estimation errors, while relying solely on local communication
between the animals. This paper describes ongoing work by UCSB, the University of Florida (UF), and the Toyon
Research Corporation on the utilization of these algorithms to dramatically improve the capabilities of small unmanned
aircraft systems (UAS) to cooperatively locate and track ground targets.
Our goal is to construct an electronic system, called GeoTrack, through which a network of hand-launched UAS
use dedicated on-board processors to perform multi-sensor data fusion. The nominal sensors employed by the system
will be EO/IR video cameras on the UAS. When GMTI or other wide-area sensors are available, as in a layered sensing
architecture, data from the standoff sensors will also be fused into the GeoTrack system. The output of the system will be
position and orientation information on stationary or mobile targets in a global geo-stationary coordinate system.
The design of the GeoTrack system requires significant advances beyond the current state-of-the-art in distributed
control for a swarm of UAS to accomplish autonomous coordinated tracking; target geo-location using distributed sensor
fusion by a network of UAS, communicating over an unreliable channel; and unsupervised real-time image-plane video
tracking on low-power computing platforms.
This paper describes our recent work combining a high-fidelity battlefield software simulation, a suite of autonomous sensor
and navigation control algorithms for unmanned air vehicles (UAVs), and a hardware-in-the-loop control interface. The
complete system supports multiple real and simulated UAVs that search for and track multiple real and simulated targets.
Targets communicate their real-time locations to the simulator through a wireless GPS link. Data from real target(s)
is used to create target(s) in the simulation testbed that may exist alongside additional simulated targets. The navigation
and video sensors onboard the UAVs are tasked (via another wireless link) by our control algorithm suite to search for
and track targets that exist in the simulation. Video data is streamed to an image plane video tracker (IPVT), which
produces detections that can be fed to a global tracker within the control suite. Routing and gimbal control algorithms use
information from the global tracker to task the UAVs, thus completing an information feedback control loop. Additional
sensors (such as the ground moving target indicator (GMTI) radar) can exist within the simulation and generate simulated
detections to augment the tracking information obtained from the IPVT.
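One pass of the feedback control loop described above can be sketched as follows; the object and method names (ipvt.detect, global_tracker.update, router.plan, and so on) are hypothetical stand-ins for the actual interfaces, which the abstract does not specify.

def control_loop(uavs, ipvt, global_tracker, router, gimbal_ctrl, simulator):
    """One pass of the information feedback loop (hypothetical interfaces).

    Video -> image-plane detections -> global tracks -> routing and gimbal
    commands -> new sensor data. All object and method names are stand-ins.
    """
    detections = []
    for uav in uavs:
        frame = uav.stream_video()                  # real or simulated video
        detections += ipvt.detect(frame, uav.pose)  # image-plane video tracker
    # Simulated wide-area sensors (e.g. GMTI) may add further detections.
    detections += simulator.simulated_detections()
    tracks = global_tracker.update(detections)
    for uav in uavs:
        uav.set_waypoints(router.plan(uav, tracks))
        uav.set_gimbal(gimbal_ctrl.point(uav, tracks))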
Our simulator is part of Toyon's Simulation of the Locations and Attack of Mobile Enemy Missiles (SLAMEM(R))
tool. SLAMEM contains detailed models for ground targets, surveillance platforms, sensors, attack aircraft, UAVs, data
exploitation, multi-source fusion, sensor retasking, and attack nomination. SLAMEM models road networks, foliage cover,
populated regions, and terrain, using Digital Terrain Elevation Data (DTED).
An important problem in unmanned air vehicle (UAV) and UAV-mounted sensor control is the target search
problem: locating target(s) in minimum time. Current methods solve the optimization of UAV routing control
and sensor management independently. While this decoupled approach makes the target search problem
computationally tractable, it is suboptimal.
In this paper, we explore the target search and classification problems by formulating and solving a joint UAV
routing and sensor control optimization problem. The routing problem is solved on a graph using receding horizon
optimal control. The graph is dynamically adjusted based on the target probability distribution function (PDF).
The objective function for the routing optimization is the solution of a sensor control optimization problem. An
optimal sensor schedule (in the sense of maximizing the viewed target probability mass) is constructed for each
candidate flight path in the routing control problem.
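The nested structure, in which each candidate route is scored by the best sensor schedule found for it and only the first leg is flown before replanning, can be sketched as follows; the graph representation and the optimize_sensor_schedule hook are assumptions for illustration.

def enumerate_paths(graph, start, horizon):
    """Depth-limited enumeration of simple paths on the routing graph
    (graph is assumed to be a dict mapping a node to its neighbors)."""
    frontier = [[start]]
    for _ in range(horizon):
        frontier = [p + [n] for p in frontier for n in graph[p[-1]] if n not in p]
    return frontier

def plan_route(graph, start, horizon, optimize_sensor_schedule):
    """Receding-horizon routing in which each candidate path is scored by the
    best sensor schedule found for it (nested optimization, illustrative).

    optimize_sensor_schedule(path) -> (score, schedule) is a hypothetical hook
    returning the viewed probability mass of the best schedule for that path.
    """
    best = (float("-inf"), None, None)
    for path in enumerate_paths(graph, start, horizon):
        score, schedule = optimize_sensor_schedule(path)
        if score > best[0]:
            best = (score, path, schedule)
    if best[1] is None:
        return None, None               # no feasible path within the horizon
    _, path, schedule = best
    return path[:2], schedule           # fly the first leg, then replan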
The PDF of the target state is represented with a particle filter and an "occupancy map" for any undiscovered
targets. The tradeoff between searching for undiscovered targets and locating tracks is handled automatically
and dynamically by the use of an appropriate objective function. In particular, the objective function is based
on the expected amount of target probability mass to be viewed.
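A short sketch of that objective, the expected target probability mass viewed by a candidate sensor footprint, is given below; the particle and occupancy-map data layout and the w_search weighting knob are assumptions, not the authors' exact formulation.

import numpy as np

def expected_viewed_mass(particles, weights, occ_map, footprint_contains,
                         w_search=1.0):
    """Expected target probability mass seen by one candidate sensor footprint.

    particles:          (N, 2) array of track-hypothesis positions (particle filter)
    weights:            (N,) particle weights
    occ_map:            dict mapping cell center (x, y) -> probability that an
                        undiscovered target occupies that cell ("occupancy map")
    footprint_contains: callable((x, y)) -> bool for the candidate footprint
    w_search:           relative weight on undiscovered targets (assumed knob)
    """
    weights = np.asarray(weights, dtype=float)
    inside = np.array([footprint_contains(tuple(p)) for p in particles])
    track_mass = float(np.sum(weights[inside]))
    search_mass = sum(p for cell, p in occ_map.items() if footprint_contains(cell))
    return track_mass + w_search * search_mass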
KEYWORDS: Sensors, Target detection, Surveillance, Control systems, Detection and tracking algorithms, Image information entropy, Kinematics, Radar, Information theory, Quality measurement
The use of measures from information theory to evaluate the expected utility of a set of candidate actions is a popular method for performing sensor resource management. Shannon entropy is a standard metric for information. Past researchers have shown [1-5] that the discrete entropy formula can measure the quality of identification information on a target, while the continuous entropy formula can measure kinematic state information of a target. In both cases, choosing controls to minimize an objective function proportional to entropy will improve one's information about the target. However, minimizing entropy does not naturally promote detection of new targets or "wide area surveillance" (WAS). This paper outlines a way to use Shannon entropy to motivate sensors to simultaneously track (partially) discovered targets and survey the search space to discover new targets. Results from the algorithmic implementation of this method show WAS being favored when most targets in the search space are undiscovered, and tracking of discovered targets being favored when most targets are in track. The tradeoff between these two competing objectives is adjusted automatically and dynamically by the objective function.
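A minimal sketch of such a combined entropy objective is given below, assuming independent occupancy cells for undiscovered targets and Gaussian state estimates for tracks; the weights and the exact combination are illustrative assumptions rather than the paper's formulation.

import numpy as np

def occupancy_entropy(p_occ):
    """Discrete Shannon entropy (bits) of independent occupancy cells."""
    p = np.clip(np.asarray(p_occ, dtype=float), 1e-12, 1 - 1e-12)
    return float(np.sum(-p * np.log2(p) - (1 - p) * np.log2(1 - p)))

def track_entropy(covariances):
    """Continuous (differential) entropy, in bits, of Gaussian track states."""
    total = 0.0
    for P in covariances:
        n = P.shape[0]
        total += 0.5 * np.log2(((2 * np.pi * np.e) ** n) * np.linalg.det(P))
    return total

def surveillance_objective(p_occ, covariances, w_search=1.0, w_track=1.0):
    """Combined objective (lower is better): many uncertain occupancy cells pull
    the sensor toward wide area surveillance, while large track covariances pull
    it toward re-observing discovered targets. The weights are assumed knobs."""
    return w_search * occupancy_entropy(p_occ) + w_track * track_entropy(covariances)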