The class of Labeled Random Finite Set filters known as the delta-Generalized Labeled Multi-Bernoulli (dGLMB) filter represents the filtering density as a set of weighted hypotheses, each consisting of a set of labeled tracks, which are in turn pairs of a track label and a track kinematic density. Upon update with a batch of measurements, each hypothesis gives rise to many child hypotheses, so truncation must be performed in any practical application. A finite compute budget can then lead to degeneracy that drops tracks. To mitigate this, we adopt a factored filtering density through the use of a novel Merge/Split algorithm. Merging has long been established in the literature; our splitting algorithm is enabled by an efficient and effective marginalization scheme that indexes each kinematic density by the measurement IDs (in a moving window) used in its update. This allows us to determine when independence holds approximately to a given tolerance, so that the "resolution" of tracking is chosen adaptively, from a single factor (dGLMB), to all-singleton factors (Labeled Multi-Bernoulli, LMB), and anywhere in between.
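The measurement-ID bookkeeping behind the split test can be illustrated with a small sketch. This is not the paper's implementation; the `Track` container and the window handling are simplified assumptions. The idea is that two groups of labeled tracks may be placed in separate factors once the measurement IDs used in their updates over the moving window are disjoint, i.e., approximate independence can be considered to hold.

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    label: int
    # Measurement IDs (within the moving window) used to update this
    # track's kinematic density.
    used_meas_ids: set = field(default_factory=set)

def can_split(group_a, group_b):
    """Two groups of tracks may go into separate factors when the
    measurement IDs used in their updates are disjoint (approximate
    independence under the tolerance discussed above)."""
    ids_a = set().union(*(t.used_meas_ids for t in group_a))
    ids_b = set().union(*(t.used_meas_ids for t in group_b))
    return ids_a.isdisjoint(ids_b)
```

Tracks that ever shared a measurement in the window stay in the same factor; everything else can be split off, down to all-singleton factors in the LMB limit.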
The recently developed Generalized Labeled Multi-Bernoulli (GLMB) filter, also known as the Vo-Vo filter, provides a “closed form” solution to the multi-target tracking problem and has found many successful applications. However, practitioners often find it a daunting task to follow all the mathematical notation required to understand the GLMB filter. This paper strives to describe the operations of the Vo-Vo filter through MATLAB code, utilizing its object-oriented features for different levels of abstraction.
The recently developed Labeled Random Finite Set (RFS) filter seems to make the problem of forming tracks across time trivial: Connect the track points with the same label, and we get a track. This paper shows different ways of forming tracks: through connecting filtered or smoothed track points, through tracing the pedigree of the last best hypothesis, and through solving the batched problem with the entire data set. It shows that the problem is nontrivial, and there are unanswered questions.
Single-sensor track stitching is a path cover problem on a graph with pairwise log likelihoods. This paper provides a theoretical justification for pursuing track association on such a graph, using a sum of pairwise log likelihoods in place of the full multi-sensor log likelihood. It outlines solution strategies through clique cover, cotemporal subgraph decomposition, and super-node stitching.
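As an illustration of the surrogate objective, a candidate path cover over tracklets can be scored by summing pairwise log likelihoods along consecutive links. The names and the stand-in likelihood below are hypothetical, not from the paper; each tracklet is represented only by its start and end states `(time, position)`.

```python
import math

def pairwise_ll(end_state_i, start_state_j):
    """Hypothetical pairwise log likelihood that tracklet j continues
    tracklet i: a crude Gaussian motion score on the time/position gap."""
    dt = start_state_j[0] - end_state_i[0]
    if dt <= 0:
        return -math.inf  # tracklet j must start after tracklet i ends
    dx = start_state_j[1] - end_state_i[1]
    return -0.5 * (dx * dx) / dt

def path_cover_score(paths, tracklets):
    """Score of a candidate path cover: the sum of pairwise log
    likelihoods over consecutive tracklets in each path, the surrogate
    for the full multi-scan log likelihood."""
    total = 0.0
    for path in paths:
        for i, j in zip(path, path[1:]):
            # tracklets[k] = (start_state, end_state)
            total += pairwise_ll(tracklets[i][1], tracklets[j][0])
    return total
```

Maximizing this score over path covers of the tracklet graph is the stitching problem the solution strategies above address.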
Our Multi-INT Data Association Tool (MIDAT) learns patterns of life (POL) of a geographical area from video analyst observations called out in textual reporting. Typical approaches to learning POLs from video use computer vision algorithms to extract the locations in space and time of various activities, and are therefore subject to the detection and tracking performance of the video processing algorithms. Numerous examples exist of human analysts monitoring live video streams and annotating or “calling out” relevant entities and activities, such as security analysis, crime-scene forensics, news reports, and sports commentary. These descriptions are typically captured as text, such as chat. Although the primary purpose of these text products is to describe events as they happen, organizations typically archive the reports for extended periods, and this archive provides a basis for building POLs. Such POLs are useful for diagnosis, assessing activities in an area against historical context, and give consumers of the products an understanding of historical patterns. MIDAT combines natural language processing, multi-hypothesis tracking, and Multi-INT Activity Pattern Learning and Exploitation (MAPLE) technologies in an end-to-end lab prototype that processes textual products produced by video analysts, infers POLs, and highlights anomalies relative to those POLs, with links to “tracks” of related activities performed by the same entity. MIDAT technologies perform well, achieving, for example, a 90% F1 score on extracting activities from the textual reports.
KEYWORDS: Digital filtering, Filtering (signal processing), Monte Carlo methods, Time metrology, Computer simulations, Electronic filtering, Matrices, Bismuth, Error analysis, Detection and tracking algorithms
In many applications where communication delays are present, measurements with earlier time stamps can arrive out of sequence, i.e., after state estimates have been obtained for the current time instant. To incorporate such an Out-Of-Sequence Measurement (OOSM), many algorithms have been proposed in the literature to obtain or approximate the optimal estimate that would have been obtained had the OOSM arrived in sequence. When OOSMs occur repeatedly, the approximate estimate that results from incorporating one OOSM has to serve as the basis for incorporating the next. The question of whether this "approximation of approximation" is well behaved, i.e., whether approximation errors accumulate in a recursive setting, has not been adequately addressed in the literature. This paper draws attention to the stability question of recursive OOSM-processing filters, formulates the problem in a specific setting, and presents simulation results suggesting that such filters are indeed well behaved. Our hope is that more research will be conducted in the future to rigorously establish the stability properties of these filters.
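For concreteness, here is a minimal sketch (assumed, not from the paper) of the brute-force reference against which approximate OOSM algorithms are judged: keep a buffer of recent time-stamped measurements together with a filter checkpoint, and re-run a simple 1-D Kalman recursion in time order whenever a delayed measurement arrives. Approximate one-step retrodiction schemes try to match this result without the re-run.

```python
def kf_run(x0, p0, meas, q=1.0, r=1.0):
    """1-D random-walk Kalman filter over time-ordered (t, z) pairs,
    starting from checkpoint mean x0 and variance p0."""
    x, p, t_prev = x0, p0, meas[0][0] - 1
    for t, z in meas:
        p += q * (t - t_prev)   # predict across the elapsed interval
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # measurement update
        p *= (1 - k)
        t_prev = t
    return x, p

def incorporate_oosm(x0, p0, buffer, t_new, z_new):
    """Brute-force OOSM handling: insert the delayed measurement into
    the time-ordered buffer and re-filter from the checkpoint."""
    buffer = sorted(buffer + [(t_new, z_new)])
    return kf_run(x0, p0, buffer)
```

By construction this reproduces the in-sequence result exactly; the stability question above concerns filters that replace the re-run with an approximation, applied recursively.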
In this paper, a study of the particle flow filter proposed by Daum and Huang is conducted. It is discovered that, for certain initial conditions, the desired particle flow that brings a particle from a good location in the prior distribution to a good location of equal density value in the posterior distribution does not exist. This explains the phenomenon of outliers experienced by Daum and Huang. Several ways of dealing with the singularity of the gradient are discussed, including (1) not moving the particles that have no flow solution, (2) stopping the flow entirely when it approaches the singularity, and (3) stopping for one step and restarting in the next. In each case the resulting set of particles is examined, and it is doubtful that they form a valid set of samples for approximating the desired posterior distribution. In the case of the last method (stop and go), the particles mostly concentrate on the mode of the desired distribution but fail to represent the whole distribution, which may explain the "success" reported in the literature so far. An established method of moving particles, the well-known Population Monte Carlo method, is briefly presented in this paper for ease of reference.
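For reference, in the linear-Gaussian case Daum and Huang's exact flow has a closed-form affine drift dx/dλ = A(λ)x + b(λ), with A(λ) = -½PHᵀ(λHPHᵀ + R)⁻¹H and b(λ) = (I + 2λA)[(I + λA)PHᵀR⁻¹z + Ax̄]. The scalar sketch below (a simplified illustration of ours, using plain Euler integration) moves a particle from the prior at λ = 0 to the posterior at λ = 1; the singularity issues discussed above arise in the general nonlinear case, not here.

```python
import numpy as np

def exact_flow_scalar(x_particles, prior_mean, P, H, R, z, n_steps=1000):
    """Euler-integrate the scalar linear-Gaussian exact flow
    dx/dlam = A(lam) x + b(lam) from lam = 0 (prior) to lam = 1
    (posterior)."""
    x = np.asarray(x_particles, dtype=float).copy()
    dlam = 1.0 / n_steps
    for k in range(n_steps):
        lam = k * dlam
        A = -0.5 * P * H * H / (lam * H * H * P + R)
        b = (1 + 2 * lam * A) * ((1 + lam * A) * P * H * z / R
                                 + A * prior_mean)
        x += dlam * (A * x + b)  # Euler step along the flow
    return x
```

Because the drift is affine, a particle starting at the prior mean follows the mean trajectory and lands (up to discretization error) on the posterior mean.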
Under a United States Army Small Business Technology Transfer (STTR) project, we have developed a MATLAB toolbox called PFLib to facilitate the exploration, learning, and use of Particle Filters by a general user. This paper describes its object-oriented design and programming interface. The software is available under a GNU GPL license.
KEYWORDS: Sensors, Simulink, Monte Carlo methods, Target detection, Detection and tracking algorithms, Algorithm development, Environmental sensing, Signal to noise ratio, Nonlinear filtering, Data modeling
In applications in which even the best EKFs and MHTs may perform poorly, the single-target and multi-target Bayes nonlinear filters become potentially important. In recent years, new implementation techniques such as sequential Monte Carlo (a.k.a. particle-system) methods have emerged that, when hosted on ever more inexpensive, smaller, and more powerful computers, make these filters potentially computationally tractable for real-time applications. A methodology for preliminary test and evaluation (PT&E) of the relative strengths and weaknesses of these algorithms is becoming increasingly necessary. The purpose of PT&E is to (1) assess the broad strengths and weaknesses of various algorithms or algorithm types; (2) justify further algorithm development; and (3) provide guidance as to which algorithms are potentially useful for which applications. At last year's conference we described our plans for the development of a PT&E tool, MENTAT. In this paper we report on current progress. Our implementation is MATLAB-based and harnesses the GUI-building capabilities of the well-known MATLAB package SIMULINK.
KEYWORDS: Antennas, Space operations, Monte Carlo methods, Statistical analysis, Probability theory, Computer simulations, Error analysis, Raman spectroscopy, Complex systems, Estimation theory
As more and more nonlinear estimation techniques become available, our interest is in finding out what performance improvement, if any, they can provide for practical nonlinear problems that have traditionally been solved using linear methods. In this paper we examine the problem of estimating spacecraft position using conical scan (conscan) for NASA's Deep Space Network antennas. We show that for additive disturbances on the antenna power measurement, the problem can be transformed into a linear one, and we present a general solution, with the least-squares solution reported in the literature as a special case. We also show that for additive disturbances on the antenna position, the problem is truly nonlinear, and we present two approximate solutions, based on linearization and the Unscented Transformation respectively, and one "exact" solution based on the Markov Chain Monte Carlo (MCMC) method. Simulations show that, with the amount of data collected in practice, linear methods perform almost the same as the MCMC method. It is only when we artificially reduce the amount of collected data and increase the level of noise that nonlinear methods offer better accuracy than linear methods, at the expense of more computation.
KEYWORDS: Sensors, Signal to noise ratio, Nonlinear filtering, Monte Carlo methods, Detection and tracking algorithms, Environmental sensing, Algorithm development, Data modeling, Particle filters, Target detection
Many nonlinear filtering (NLF) algorithms have been proposed in recent years for application to single- and multi-target detection and tracking. A methodology for preliminary test and evaluation (PT&E) of these algorithms is becoming increasingly necessary. Under U.S. Army Research Office funding, Scientific Systems Co. Inc. and Lockheed Martin are developing a Multi-Environment NLF Tracking Assessment Testbed (MENTAT) to address this need. Once completed, MENTAT is to provide a "hierarchical" series of PT&E Monte Carlo simulated environments (including benchmark problems) of increasing difficulty and realism. The simplest MENTAT environment will consist of simple 2D scenarios with Gaussian-noise backgrounds and simple target maneuvers. The most complicated environments will involve: (1) increasingly realistic simulated low-SNR backgrounds; (2) increasing motion and sensor nonlinearity; (3) increasingly higher state dimensionality; (4) increasing numbers of targets; and so on.
In this paper we consider the problem of autonomously improving upon a sensor management algorithm for better tracking performance. Since various performance metrics have been proposed and studied for monitoring a tracking system's behavior, the problem can be solved by first parameterizing a sensor management algorithm and then searching the parameter space for a (sub-)optimal solution. Genetic Algorithms (GA) are ideally suited for this optimization task. In our GA approach, the sensor management algorithm is driven by "rules" that have a "condition" part specifying track locations and uncertainties, and an "action" part specifying where the Fields of View (FoVs) of the sensors should be directed. Initial simulation studies using a Multi-Hypothesis Tracker and the Kullback-Leibler metric (as a basis for the GA fitness function) are presented. They indicate that the proposed method is feasible and promising.
In multi-hypothesis target tracking, given the time-predicted tracks, we consider the sensor management problem of directing the sensors' Fields of View (FOVs) in such a way that the target detection rate is improved. Defining the (squared) distance between a sensor and a track as the squared Euclidean distance between the centers of their respective Gaussian distributions, weighted by the sum of the covariance matrices, the problem is formulated as the minimization of the Hausdorff distance from the set of tracks to the set of sensors. An analytical solution for the single-sensor case is obtained and is extended to the multiple-sensor case. This extension is achieved as follows: (1) it is first proved that for an optimal solution, there exists a partition of the set of tracks into subsets, and an association of each subset with a sensor, such that each subset-sensor pair is optimal in the Hausdorff distance sense; (2) a brute-force search is then conducted over all possible subset partitions of the tracks as well as the permutations of sensors; (3) for each subset-sensor pair, the optimal solution is obtained analytically; and (4) the configuration with the smallest Hausdorff distance is declared the optimal solution for the given multi-target multi-sensor problem. Well-established loopless algorithms for generating set partitions and permutations are implemented to reduce the computational complexity. A simulation result demonstrating the proposed sensor management algorithm is also presented.
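The covariance-weighted distance and the directed Hausdorff objective described above can be sketched as follows (a simplified illustration; the function names are ours, and each track or sensor is reduced to a Gaussian mean and covariance):

```python
import numpy as np

def weighted_sq_dist(mu_a, cov_a, mu_b, cov_b):
    """Squared Euclidean distance between Gaussian centers, weighted by
    the sum of their covariance matrices."""
    d = np.asarray(mu_a, float) - np.asarray(mu_b, float)
    w = np.asarray(cov_a, float) + np.asarray(cov_b, float)
    return float(d @ np.linalg.solve(w, d))

def hausdorff_tracks_to_sensors(tracks, sensors):
    """Directed Hausdorff distance from the set of tracks to the set of
    sensors: the distance of the worst-covered track to its nearest
    sensor. Each element is a (mean, covariance) pair."""
    return max(
        min(weighted_sq_dist(tm, tc, sm, sc) for sm, sc in sensors)
        for tm, tc in tracks
    )
```

Minimizing this quantity over sensor FOV placements is the objective; the brute-force search over subset partitions evaluates it for each candidate subset-sensor assignment.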