Detection and tracking of dim targets in heavy clutter environments is a daunting theoretical and practical problem.
Application of the recently developed Background Agnostic Cardinalized Probability Hypothesis Density (BA-CPHD)
filter provides a very promising approach that adequately addresses all the complexities and the nonlinear nature of this
problem. In this paper, we present analysis, derivation, development, and application of a BA-CPHD implementation for
tracking dim ballistic targets in environments with a range of unknown clutter rates, unknown clutter distribution, and
unknown target probability of detection. The effectiveness and accuracy of the implemented algorithms are assessed and
evaluated. Results that evaluate and also demonstrate the specific merits of the proposed approach are presented.
In several previous publications the first author has proposed a "generalized likelihood function" (GLF) approach
for processing nontraditional measurements such as attributes, features, natural-language statements, and inference
rules. The GLF approach is based on random set "generalized measurement models" for nontraditional
measurements. GLFs are not conventional likelihood functions, since they are not density functions and their
integrals are usually infinite, rather than equal to 1. For this reason, it has been unclear whether or not the
GLF approach is fully rigorous from a strict Bayesian point of view. In a recent paper, the first author demonstrated
that the GLF of a specific type of nontraditional measurement, namely quantized measurements, is rigorously
Bayesian. In this paper we show that this result can be generalized to arbitrary nontraditional measurements,
thus removing any doubt that the GLF approach is rigorously Bayesian.
Most multitarget tracking algorithms, such as JPDA, MHT, and the PHD and CPHD filters, presume the following
measurement model: (a) targets are point targets, (b) every target generates at most a single measurement,
and (c) any measurement is generated by at most a single target. However, the most familiar sensors, such as
surveillance and imaging radars, violate assumption (c). This is because they are actually superpositional: that
is, any measurement is a sum of signals generated by all of the targets in the scene. At this conference in 2009, the
first author derived exact formulas for PHD and CPHD filters that presume general superpositional measurement
models. Unfortunately, these formulas are computationally intractable. In this paper, we modify and generalize
a Gaussian approximation technique due to Thouin, Nannuru, and Coates to derive a computationally tractable
superpositional-CPHD filter. Implementation requires sequential Monte Carlo (particle filter) techniques.
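As a rough illustration of the superpositional model (a sketch under an assumed Gaussian point-spread response and additive noise, not the filter derived in the paper), each sensor cell returns the sum of all targets' contributions:

```python
# Illustrative sketch of a superpositional measurement model: each cell
# observes the SUM of the signals from all targets plus Gaussian noise,
# violating the "one measurement per target" assumption of JPDA/MHT/PHD/CPHD.
# The Gaussian point-spread function is a hypothetical choice for illustration.
import numpy as np

def point_spread(cell_centers, target_pos, amplitude=1.0, width=2.0):
    """Hypothetical Gaussian response of one target across all sensor cells."""
    d2 = np.sum((cell_centers - target_pos) ** 2, axis=1)
    return amplitude * np.exp(-d2 / (2.0 * width ** 2))

def superpositional_measurement(cell_centers, target_positions, noise_sigma=0.1):
    """Cell returns are the superposition of every target's signal."""
    z = np.zeros(len(cell_centers))
    for x in target_positions:
        z += point_spread(cell_centers, x)            # signals superpose
    return z + noise_sigma * np.random.randn(len(z))  # additive sensor noise
```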
In this paper, we introduce a decentralized fusion and tracking approach based on distributed multi-source multitarget
filtering and robust communication with the following features: (i) data reduction; (ii) a disruption-tolerant dissemination
procedure that takes advantage of storage and mobility; and (iii) efficient data set reconciliation algorithms.
We developed and implemented a complex, high-fidelity marine application demonstration of this approach that encompasses
all relevant environmental parameters. In the simulated example, multi-source information is fused by
exploiting sensors from disparate Unmanned Underwater Vehicle (UUV) and Unmanned Surface Vehicle (USV)
multi-sensor platforms. Communications among the platforms are continuously established and broken depending
on the time-changing geometry. We evaluate the developed algorithms by assessing their performance
in different scenarios.
In this paper we present collision event modeling, detection, and tracking using a space-based Low Earth
Orbit (LEO) EO/IR constellation of platforms. The implemented testbed is based on our previous work on
dispersed and disparate sensor management for tracking Space Objects (SOs). The known SOs' LEO trajectory
parameters are tracked using a first-order state perturbation model, and the estimates are updated using Monte Carlo
sampling techniques. Using multi-hypothesis testing, we estimate whether the tracked SO is on a collision trajectory with
a satellite. Trajectories that can lead to a collision are then continuously observed and tracked using observations from
EO/IR sensors located on LEO platforms. The developed algorithms are tested and evaluated on a simulated testbed.
Open problems and future work are discussed.
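For illustration only, a minimal Monte Carlo version of such a collision test might look as follows; `propagate` stands in for the first-order perturbation model and is a hypothetical placeholder, not the paper's implementation:

```python
# Hedged Monte Carlo sketch of the collision test: sample the SO's state
# posterior, propagate each sample along its trajectory, and estimate the
# probability of approaching the satellite within a miss-distance threshold.
import numpy as np

def collision_probability(state_samples, sat_trajectory, propagate,
                          times, threshold_km=1.0):
    """sat_trajectory: (len(times), 3) satellite positions; `propagate`
    is a placeholder for the first-order state perturbation model."""
    hits = 0
    for x0 in state_samples:
        traj = np.array([propagate(x0, t) for t in times])   # sampled SO path
        miss = np.linalg.norm(traj - sat_trajectory, axis=1).min()
        hits += miss < threshold_km                          # close approach?
    return hits / len(state_samples)
```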
In this paper we present methods for multimodel filtering of space object states based on the theory of finite-state,
time-nonhomogeneous càdlàg Markov processes and the filtering of partially observable space object trajectories.
The state and observation equations of space objects are nonlinear and therefore it is hard to estimate the conditional
probability density of the space object trajectory states given EO/IR, radar or other nonlinear observations. Moreover,
space object trajectories can suddenly change due to abrupt changes in the parameters affecting a perturbing force or
due to unaccounted forces. Such trajectory changes can lead to the loss of existing tracks and may cause collisions
with vital operating space objects such as weather or communication satellites. The presented estimation methods will
aid in preventing the occurrence of such collisions and provide warnings for collision avoidance.
Multitarget detection and tracking algorithms typically presume that sensors are spatially registered, i.e., that
all sensor states are precisely specified with respect to some common coordinate system. In actuality, sensor
observations may be contaminated by unknown spatial misregistration biases. This paper demonstrates that
these biases can be estimated by exploiting the data collected from a sufficiently large number of unknown
targets, using a unified methodology in which sensor registration and multitarget tracking are performed
jointly. We show how to (1) model single-sensor bias, (2) integrate the biased sensors into a
single probabilistic multiplatform-multisensor-multitarget system, (3) construct the optimal solution to the joint
registration/tracking problem, and (4) devise a principled computational approximation of this optimal solution.
The approach does not presume the availability of GPS or other inertial information.
This paper generalizes the PHD filter to the case of target-dependent clutter. It is assumed that a distinct a
priori Poisson clutter process is associated with each target. Multitarget calculus techniques are used to derive
formulas for the measurement-update step. These formulas require combinatorial sums over all partitions of the
current measurement-set. Further research is required to address the resulting computational issues.
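To see why these combinatorial sums are problematic, the following sketch (illustrative, not from the paper) enumerates all partitions of a measurement set; their count is the Bell number, which grows super-exponentially with the number of measurements:

```python
# Minimal sketch of the combinatorial burden: the exact measurement-update
# sums over ALL partitions of the measurement set Z, and the number of
# partitions (the Bell number) explodes as |Z| grows.
def partitions(elements):
    """Recursively generate all partitions of a list of measurements."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for part in partitions(rest):
        # put `first` into each existing block ...
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        # ... or into a new block of its own
        yield [[first]] + part

# Bell numbers 1, 2, 5, 15, 52, 203, 877 for |Z| = 1..7
for n in range(1, 8):
    print(n, sum(1 for _ in partitions(list(range(n)))))
```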
The conventional PHD and CPHD filters presume that the probability pD(x) that a measurement will
be collected from a target with state-vector x (the state-dependent probability of detection) is known a priori.
However, in many applications this presumption is false. A few methods have been devised for estimating the
probability of detection, but they typically presume that pD(x) is constant in both time and the region of interest.
This paper introduces CPHD/PHD filters that are capable of multitarget track-before-detect operation even when
probability of detection is not known and, moreover, when it is not necessarily constant, either temporally or
spatially. Furthermore, these filters are potentially computationally tractable. We begin by deriving CPHD/PHD
filter equations for the case when probability of detection is unknown but the clutter model is known a priori.
Then, building on the results of a companion paper, we note that CPHD/PHD filters can be derived for the case
when neither the probability of detection nor the background clutter is known.
Dynamic sensor management of heterogeneous and distributed sensors presents a daunting theoretical and practical
challenge. We present a Situational Awareness Sensor Management (SA-SM) algorithm for the tracking of ground
targets moving on a road map. It is based on the previously developed information-theoretic Posterior Expected
Number of Targets of Interest (PENTI) objective function, and utilizes combined measurements from an airborne
GMTI radar, and a space-based EO/IR sensor. The resulting filtering methods and techniques are tested and evaluated.
Different scan rates for the GMTI radar and the EO/IR sensor are evaluated and compared.
The detection and tracking of collision events involving existing Low Earth Orbit (LEO) Resident Space Objects
(RSOs) is becoming increasingly important as LEO traffic volume grows and is anticipated to increase even further
in the near future. Changes in velocity that can lead to a collision are hard to detect early, before the collision
happens. Several collision events can happen at the same time, and continuous monitoring of LEO orbits is necessary
in order to determine and implement collision avoidance strategies. We present a simulation of a constellation system
consisting of multiple platforms carrying EO/IR sensors for the detection of such collisions. The presented simulation
encompasses the full complexity of LEO trajectory changes that can lead to collisions with currently operating
satellites. An efficient multitarget filter with information-theoretic multisensor management is implemented and
evaluated on different constellations.
In a previous conference paper the first author addressed the problem of devising CPHD and PHD filters
that are capable of multitarget detection and tracking in unknown, dynamically changing clutter. That paper
assumed that the clutter process is Poisson with an intensity function that is a finite mixture with unknown
parameters. The measurement-update equations for these CPHD/PHD filters involved combinatorial sums over
all partitions of the current measurement-set. This paper describes an approach that avoids combinatorial sums
and is therefore potentially computationally tractable. Clutter is assumed to be a binomial i.i.d. cluster process
with unknown parameters. Given this, three different and successively more tractable CPHD/PHD filters are
derived, all capable of multitarget track-before-detect operation. The first assumes that the entire intensity
function of the clutter process is unknown. The second and third assume that the clutter spatial distribution is
known but that the clutter rate (number of clutter returns per scan) is unknown.
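A minimal sketch of the assumed clutter model (the uniform spatial distribution below is an illustrative assumption, standing in for the spatial density that the second and third filters treat as known):

```python
# Hedged sketch of a binomial i.i.d. cluster clutter process: the number of
# clutter returns per scan is Binomial(N, q) (so the clutter rate is N*q),
# and each return is drawn i.i.d. from a spatial density c(z).
import numpy as np

def sample_binomial_iid_clutter(N, q, region, rng=None):
    """region = ((xmin, xmax), (ymin, ymax)); returns an (n, 2) array."""
    rng = rng or np.random.default_rng()
    n = rng.binomial(N, q)                    # clutter count for this scan
    (x0, x1), (y0, y1) = region
    xs = rng.uniform(x0, x1, size=n)          # i.i.d. spatial samples from a
    ys = rng.uniform(y0, y1, size=n)          # uniform c(z) (illustrative)
    return np.column_stack([xs, ys])
```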
In recent years the first author has developed a unified, computationally tractable approach to multisensor-multitarget
sensor management. This approach consists of closed-loop recursion of a PHD or CPHD filter with
maximization of a "natural" sensor management objective function called PENT (posterior expected number of
targets). In this paper we extend this approach so that it can be used in unknown, dynamic clutter backgrounds.
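Schematically, the closed-loop recursion can be summarized as below; the function names are placeholders for the PHD/CPHD filter steps and the PENT objective, not a published API:

```python
# Schematic of the closed-loop recursion: time-update the PHD/CPHD, pick the
# sensor action that maximizes PENT (posterior expected number of targets),
# collect the tasked measurements, then measurement-update. All callables are
# hypothetical placeholders for the components the approach assumes.
def sensor_management_step(posterior, candidate_actions,
                           phd_predict, pent, collect, phd_update):
    predicted = phd_predict(posterior)                  # time-update
    best_action = max(candidate_actions,                # maximize the PENT
                      key=lambda a: pent(predicted, a)) # objective function
    z = collect(best_action)                            # tasked measurement set
    return phd_update(predicted, z)                     # measurement-update
```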
We further develop our previous work on sensor management of disparate and dispersed sensors for tracking
geosynchronous satellites presented last year at this conference by extending the approach to a network of Space Based
Visible (SBV) type sensors on board LEO platforms. We demonstrate novel multisensor-multiobject algorithms which
account for complex space conditions such as phase angles and Earth occlusions. Phase angles are determined by
the relative orientation of the sun, the SBV sensor, and the object, and are an important factor in determining the
probability of detection for the objects. To optimally and simultaneously track multiple geosynchronous satellites, our
tracking algorithms are based on the Probability Hypothesis Density (PHD) approximation of multiobject densities,
its regularized particle filter implementations (regularized PHD-PF), and a sensor management objective function, the
Posterior Expected Number of Objects.
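For illustration, the phase angle is the sun-object-sensor angle measured at the object; the simple linear pD model below is an assumption for the sketch, not the paper's detection model:

```python
# Hedged sketch: compute the phase angle and map it to a notional detection
# probability for an optical (SBV-type) sensor, which degrades as the angle
# grows. The pD model and its parameters are illustrative assumptions.
import numpy as np

def phase_angle(sun_pos, object_pos, sensor_pos):
    """Angle at the object between the directions to the sun and the sensor."""
    to_sun = sun_pos - object_pos
    to_sensor = sensor_pos - object_pos
    cos_a = np.dot(to_sun, to_sensor) / (
        np.linalg.norm(to_sun) * np.linalg.norm(to_sensor))
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

def detection_probability(alpha, p_max=0.95, alpha_cut=np.radians(120)):
    """Notional pD: best at full illumination (alpha=0), zero past a cutoff."""
    return p_max * max(0.0, 1.0 - alpha / alpha_cut)
```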
Optimal sensor management of dispersed and disparate sensors for tracking Low Earth Orbit (LEO) objects
presents a daunting theoretical and practical challenge since it requires the optimal utilization of different types of
sensors and platforms that include Ground Based Radars (GBRs) positioned throughout the globe, and the Space
Based Visible (SBV) sensor on board LEO platforms. We derive and demonstrate new computationally efficient algorithms
for multisensor-multiobject tracking of LEO objects. The algorithms are based on the Posterior Expected
Number of Objects as the sensor management objective function, observation models for the sensors/platforms, and
the Probability Hypothesis Density Particle Filter (PHD-PF) tracker.
Constellations of EO/IR space based sensors can be extremely valuable for space situational awareness. In this
paper, we present a trade-off analysis and comparisons of different Low Earth Orbit (LEO) EO/IR sensor platform
constellations for space situational awareness tasks. These tasks include early observation of changing events, and
localization and tracking of changing LEO orbits. We derive methods and metrics for evaluation, testing, and comparisons
of different sensor constellations based on realistic models and computationally efficient methods for simulating
realistic scenarios.
We derive new algorithms for Low Earth Orbit (LEO) event estimation based on joint search and sensor management
of space based EO/IR sensors. Our approach is based on particle representation of hypothesized probability
densities and the Posterior Expected Number of Objects of Interest sensor management objective function. We address
scientific and practical challenges of this LEO estimation problem in the context of space situational awareness. These
challenges include estimating changes in satellite trajectories, estimating current trajectories (localization), and estimating
future collisions with other LEO space objects. Simulations and the results obtained using actual LEO satellites
are presented.
Dynamic sensor management of dispersed and disparate sensors for space situational awareness presents daunting
scientific and practical challenges as it requires optimal and accurate maintenance of all Resident Space Objects
(RSOs) of interest. We demonstrate an approach to the space-based sensor management problem by extending a
previously developed and tested sensor management objective function, the Posterior Expected Number of Targets
(PENT), to disparate and dispersed sensors. This PENT extension together with observation models for various sensor
platforms, and a Probability Hypothesis Density Particle Filter (PHD-PF) tracker provide a powerful tool for tackling
this challenging problem. We demonstrate the approach using simulations for tracking RSOs by a Space Based Visible
(SBV) sensor and ground based radars.
Joint search and sensor management for space situational awareness presents daunting scientific and practical
challenges as it requires a simultaneous search for new space objects and catalog updates of current ones. We
demonstrate a new approach to joint search and sensor management by utilizing the Posterior Expected Number of
Targets (PENT) as the objective function, an observation model for a space-based EO/IR sensor, and a Probability
Hypothesis Density Particle Filter (PHD-PF) tracker. Simulation and results using actual Geosynchronous Satellites
are presented.
Sensor management for space situational awareness presents a daunting theoretical and practical challenge as
it requires the use of multiple types of sensors on a variety of platforms to ensure that the space environment is
continuously monitored. We demonstrate a new approach utilizing the Posterior Expected Number of Targets (PENT)
as the sensor management objective function, an observation model for a space-based EO/IR sensor platform, and a
Probability Hypothesis Density Particle Filter (PHD-PF) tracker. Simulation and results using actual Geostationary
Satellites are presented. We also demonstrate enhanced performance by applying the Progressive Weighting Correction
(PWC) method for regularization in the implementation of the PHD-PF tracker.
A theoretical formulation for mission-based sensor management and
information fusion using advanced tools of probability theory and stochastic
processes is presented.
We apply Bayes' Belief Network methods to fuse features and determine
a tactical significance function which is used by the sensor management objective
function. The resulting estimated multi-sensor multi-target posterior
reflects tactical significance, and is used to determine the course of action
for the given mission. We demonstrate the performance of the algorithm using the simple mission of
reaching a pre-specified location while avoiding threatening targets, and
discuss the results.
At last year's conference we demonstrated new results using a foundational, joint control-theoretic approach to situation assessment (SA) and SA sensor management that is based on a "dynamic situational significance map", the maximization of the expected number of targets of tactical interest, and approximate multitarget filters (specifically, first-order multitarget moment filters and multi-hypothesis correlator (MHC) engines). This year we report on the following new developments and extensions: (1) a tactical significance function based on the fusion of different ambiguous attributes from several different sources; (2) a Bayes' belief network formulation for multitarget tracking and information fusion; and (3) a recursive closed-form expression for the posterior expected number of targets of interest (PENTI) for any number of sources. Results of testing this sensor management algorithm with significance maps defined in terms of target/attribute interrelationships using simplified battlefield situations demonstrate that these new advancements allow for better SA and more efficient SA sensor management.
Sensor management in support of Level 1 data fusion (multisensor integration) or Level 2 data fusion (situation assessment) requires a computationally tractable multitarget filter. The theoretically optimal approach to multitarget filtering is a suitable generalization of the recursive Bayes nonlinear filter. However, this optimal filter is so computationally challenging that it must usually be approximated. We report on an approximation of multitarget nonlinear filtering for sensor management that is based on the particle filter implementation of Stein-Winter probability hypothesis densities (PHDs). Our main focus is on the operational utility of the implementation, and its computational efficiency and robustness for sensor management applications. We present a multitarget Particle Filter (PF) implementation of the PHD that includes clustering, regularization, and computational efficiency enhancements. We present some open problems and suggest future developments. Sensor management demonstrations using a simulated multitarget scenario are presented.
The particle filter is an effective technique for target tracking in the presence of a nonlinear system model, a nonlinear measurement model, or non-Gaussian noise in the system and/or measurement processes. In this paper, we compare three particle filtering algorithms on a spawning ballistic target tracking scenario. One of the algorithms, the tagged particle filter (TPF), was recently developed by us. It uses separate sets of particles for separate tracks; however, data association to different tracks is interdependent. The other two algorithms implemented in this paper are the probability hypothesis density (PHD) algorithm and the joint multitarget probability density (JMPD). The PHD filter propagates the first-order statistical moment of the multitarget density using particles, while the JMPD stacks the states of a number of targets to form a single particle that is representative of the whole system. Simulation results are presented to compare the performances of these algorithms.
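For reference, a minimal bootstrap (SIR) particle filter step for a single scalar-state target is sketched below; the multitarget variants compared in the paper (TPF, PHD, JMPD) build on this basic predict/update/resample cycle, and the models f and h are placeholders, not the paper's implementations:

```python
# Minimal bootstrap (SIR) particle filter step under an assumed Gaussian
# measurement-noise model. Not the TPF/PHD/JMPD algorithms themselves.
import numpy as np

def sir_step(particles, weights, z, f, h, meas_sigma):
    """One predict/update/resample cycle for a scalar target state."""
    # predict: push each particle through the (possibly nonlinear) dynamics f
    particles = np.array([f(x) for x in particles])
    # update: reweight by the likelihood of z under the (nonlinear) model h
    residuals = z - np.array([h(x) for x in particles])
    weights = weights * np.exp(-0.5 * (residuals / meas_sigma) ** 2)
    weights = weights / weights.sum()
    # resample: draw with replacement proportionally to weight (fights degeneracy)
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```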
Sensor management in support of situation assessment (SA) presents a daunting theoretical and practical challenge. We demonstrate new results using a foundational, joint control-theoretic approach to SA and SA sensor management that is based on three concepts: (1) a "dynamic situational significance map" that mathematically specifies the meaning of tactical significance for a given theater of interest at a given moment; (2) an intuitively meaningful and potentially computationally tractable objective function for SA, namely maximization of the expected number of targets of tactical interest; and (3) integration of these two concepts with approximate multitarget filters (specifically, first-order multitarget moment filters and multi-hypothesis correlator (MHC) engines). Under this approach, sensors will be directed to preferentially collect observations from targets of actual or potential tactical significance, according to an adaptively modified definition of tactical significance.
Results of testing this sensor management algorithm with significance maps defined in terms of a target's location, speed, and heading will be presented. Testing is performed against simulated data, and different sensor management algorithms, including the proposed one, are compared.
The ambiguity of human information sources and of a priori human context would seem to automatically preclude the feasibility of a Bayesian approach to information fusion. We show that this is not necessarily the case, and that one can model the ambiguities associated with defining a "state" or "states of interest" of an entity. We show likewise that we can model information such as natural-language statements, and hedge against the uncertainties associated with the modeling process. Likewise, a likelihood can be created that hedges against the inherent uncertainties in information generation and collection, including the uncertainties created by the passage of time between information collections. As with the processing of conventional sensor information, we use the Bayes filter to produce posterior distributions from which we can extract estimates not only of the states, but also estimates of the reliability of those state-estimates. Results of testing this novel Bayes-filter information-fusion approach against simulated data are presented.
Multisensor-multitarget sensor management is viewed as a problem in nonlinear control theory. This paper applies newly developed theories for sensor management based on a Bayesian control-theoretic foundation. Finite-Set Statistics (FISST) and the Bayes recursive filter for the entire multisensor-multitarget system are used with information-theoretic objective functions in the development of the sensor management algorithms. The theoretical analysis indicates that some of these objective functions lead to potentially tractable sensor management algorithms when used in conjunction with MHC (multi-hypothesis correlator)-like algorithms. We show examples of such algorithms and present an evaluation of their performance against multisensor-multitarget scenarios. This sensor management formulation also allows for the incorporation of target preference, and experiments demonstrating the performance of sensor management with target preference will be presented.
In this paper we consider the problem of autonomously improving upon a sensor management algorithm for better tracking performance. Since various performance metrics have been proposed and studied for monitoring a tracking system's behavior, the problem is solvable by first parameterizing a sensor management algorithm and then searching the parameter space for a (sub-)optimal solution. Genetic Algorithms (GA) are ideally suited for this optimization task. In our GA approach, the sensor management algorithm is driven by "rules" that have a "condition" part to specify track locations and uncertainties, and an "action" part to specify where the Fields of View (FoVs) of the sensors should be directed. Initial simulation studies using a Multi-Hypothesis Tracker and the Kullback-Leibler metric (as a basis for the GA fitness function) are presented. They indicate that the proposed method is feasible and promising.
In multi-hypothesis target tracking, given the time-predicted tracks, we consider the sensor management problem of directing the sensors' Field of View (FOV) in such a way that the target detection rate is improved. Defining a (squared) distance between a sensor and a track as the (squared) Euclidean distance between the centers of their respective Gaussian distributions, weighted by the sum of the covariance matrices, the problem is formulated as the minimization of the Hausdorff distance from the set of tracks to the set of sensors. An analytical solution for the single sensor case is obtained, and is extended to the multiple sensors case. This extension is achieved by performing the following: (1) It is first proved that for an optimal solution, there exists a partition of the set of tracks into subsets, and an association of each subset with a sensor, such that each subset-sensor pair is optimal in the Hausdorff distance sense; (2) a brute force search is then conducted to check all possible subset-partitions of the tracks as well as the permutations of sensors; (3) for each subset-sensor pair, the optimal solution is obtained analytically; and (4) the configuration with the smallest Hausdorff distance is declared as the optimal solution for the given multi-target multi-sensor problem. Some well-established loopless algorithms for generating set partitions and permutations are implemented to reduce the computational complexity. A simulation result demonstrating the proposed sensor management algorithm is also presented.
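A short sketch of the two distance definitions used above (reading "weighted by the sum of the covariance matrices" as a Mahalanobis-style weighting; the function signatures are assumptions for illustration):

```python
# Hedged sketch: a covariance-weighted squared distance between a track and
# a sensor, and the directed Hausdorff distance from the set of tracks to
# the set of sensors (worst-case track, best-case sensor).
import numpy as np

def weighted_sq_distance(mu_t, P_t, mu_s, P_s):
    """Squared distance between Gaussian centers, weighted by the sum of
    their covariance matrices (Mahalanobis-like reading of the definition)."""
    d = mu_t - mu_s
    return float(d @ np.linalg.inv(P_t + P_s) @ d)

def hausdorff_tracks_to_sensors(tracks, sensors):
    """Directed Hausdorff distance; tracks/sensors are (mean, cov) pairs."""
    return max(
        min(weighted_sq_distance(mu_t, P_t, mu_s, P_s) for mu_s, P_s in sensors)
        for mu_t, P_t in tracks
    )
```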
For the last three years at this conference we have been describing the implementation of a unified, scientific approach to performance estimation for various aspects of data fusion: multitarget detection, tracking, and identification algorithms; sensor management algorithms; and adaptive data fusion algorithms. The proposed approach is based on finite-set statistics (FISST), a generalization of conventional statistics to multisource, multitarget problems. Finite-set statistics makes it possible to directly extend Shannon-type information metrics to multisource, multitarget problems in such a way that information can be defined and measured even though any given end-user may have conflicting or even subjective definitions of what "informative" means. In this presentation, we will show how to extend our previous results to two new problems: first, evaluating the robustness of multisensor, multitarget algorithms; and second, evaluating the performance of multisource-multitarget threat assessment algorithms.
The theoretically optimal approach to multitarget detection, tracking, and identification is a suitable generalization of the recursive Bayes nonlinear filter. However, this optimal filter is so computationally challenging that it must usually be approximated. We report on a novel approximation of multitarget nonlinear filtering based on the spectral compression (SPECC) nonlinear filter implementation of Stein-Winter probability hypothesis densities (PHDs). In its current implementation, SPECC is a two-dimensional, four-state, FFT-based filter that is Bayes-closed. It replaces a log-posterior or log-likelihood with an approximate log-posterior or log-likelihood that is a truncation of a Fourier basis. This approximation is based on the minimization of the least-squares error of the log-densities. The ultimate operational utility of our approach depends on its computational efficiency and robustness when compared with similar approaches. Another novel aspect of the proposed algorithm is the propagation of a first-order statistical moment of the multitarget system. This moment, the probability hypothesis density (PHD), is a density function on single-target state space which is uniquely defined by the following property: its integral in any region of state space is the expected number of targets in that region. It is the expected value of the point process of the random track set (i.e., the density function whose integral in any region of state space is the actual number of targets in the region). The adequacy and accuracy of the algorithm when applied to simulated and real scenarios involving ground targets are demonstrated.
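The defining property of the PHD quoted above can be written compactly as:

```latex
% The PHD D(x) is the unique density on single-target state space whose
% integral over any region S equals the expected number of targets in S.
\[
  \int_{S} D(x)\, dx \;=\; \mathbb{E}\bigl[\,\lvert X \cap S \rvert\,\bigr]
  \qquad \text{for every region } S,
\]
where $X$ is the random finite set of target states.
```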
For the last two years at this conference, we have described the implementation of a unified, scientific approach to performance measurement for data fusion algorithms based on FINITE-SET STATISTICS (FISST). FISST makes it possible to directly extend Shannon-type information metrics to multisource, multitarget problems. In previous papers we described application of information Measures of Effectiveness (MoEs) to multisource-multitarget data fusion and to non-distributed sensor management. In this follow-on paper we show how to generalize this work to DISTRIBUTED sensor management and ADAPTIVE DATA FUSION.
Last year at this conference we described initial results in the practical implementation of a unified, scientific approach to performance measurement for data fusion algorithms. The proposed approach is based on 'finite-set statistics' (FISST), a generalization of conventional statistics to multisource, multitarget problems. Finite-set statistics makes it possible to directly extend Shannon-type information metrics to multisource, multitarget problems in such a way that 'information' can be defined and measured even though any given end-user may have conflicting or even subjective definitions of what 'informative' means. In last year's paper, we described scientific performance evaluation for Level 1 data fusion. In this follow-on paper we describe a generalization of the FISST approach to Level 4 data fusion, specifically sensor management. Our Level 4 MoEs are based on the fact that sensor management is a support function: its purpose is to redirect collection assets in order to improve the input data into, and therefore the output performance of, a Level 1 fusion algorithm. Accordingly, our basic MoE is 'excess information'. By using a sensor scheduler to simulate various sensor management algorithms, we established the effectiveness and intuitiveness of two different sensor management MoEs: the multitarget Kullback-Leibler information metric, and the Hausdorff multitarget miss-distance metric.
Real-time fusion algorithms are often patchworks of loosely integrated sub-algorithms, each of which addresses a separate fusion objective and each of which may process only one kind of evidence. Because these objectives are often in conflict, adaptive methods (e.g. internal monitoring and feedback control to dynamically reconfigure algorithms) are often necessary to ensure optimal performance. This paper describes a different approach to adaptive fusion in which explicit algorithm reconfiguration is largely unnecessary because conflicting objectives are simultaneously resolved within a self-reconfiguring, optimally integrated algorithm. This approach is based on Finite-Set Statistics (FISST), a special case of random set theory that unifies many aspects of multisource-multitarget data fusion, including detection, tracking, identification, and evidence accrual. This paper describes preliminary results in applying a FISST-based filtering approach to a ground-based, single-target identification scenario based on the fusion of several types of synthetic message-based data from several sensors.
The encoding of images at high quality is important in a number of applications. We have developed an approach to coding that produces no visible degradation and that we denote as perceptually transparent. Such a technique achieves a modest compression, but still significantly higher than error free codes. Maintaining image quality is not important in the early stages of a progressive scheme, when only a reduced resolution preview is needed. In this paper, we describe a new method for the progressive transmission of high quality still images that efficiently uses the lower resolution images in the encoding process. Analysis-based interpolation is used to estimate the higher resolution image, and reduces the incremental information transmitted at each step. This methodology for high quality image compression is also aimed at obtaining a compressed image of higher perceived quality than the original.
In previous work, we reported on the benefits of noise reduction prior to coding of very high quality images. Perceptual transparency can be achieved with a significant improvement in compression as compared to error free codes. In this paper, we examine the benefits of preprocessing when the quality requirements are not very high, and perceptible distortion results. The use of data dependent anisotropic diffusion that maintains image structure, edges, and transitions in luminance or color is beneficial in controlling the spatial distribution of errors introduced by coding. Thus, the merit of preprocessing is for the control of coding errors. In this preliminary study, we only consider preprocessing prior to the use of the standard JPEG and MPEG coding techniques.
We have recently proposed the use of geometry in image processing by representing an image as a surface in 3-space. The linear variations in intensity (edges) were shown to have a nondivergent surface normal. Exploiting this feature, we introduced a nonlinear adaptive filter that only averages the divergence in the direction of the surface normal. This led to an inhomogeneous diffusion (ID) that averages the mean curvature of the surface, rendering edges invariant while removing noise. This mean curvature diffusion (MCD), when applied to an isolated edge embedded in additive Gaussian noise, results in complete noise removal and edge enhancement with the edge location left intact. In this paper we introduce a new filter that will render corners (two intersecting edges), as well as edges, invariant to the diffusion process. Because many edges in images are not isolated, the corner model better represents the image than the edge model. For this reason, this new filtering technique, while encompassing MCD, also outperforms it when applied to images. Many applications will benefit from this geometrical interpretation of image processing, and those discussed in this paper include image noise removal, edge and/or corner detection and enhancement, and perceptually transparent coding.
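A minimal sketch of a curvature-driven diffusion step of this general kind (level-set form; this explicit finite-difference discretization is an illustrative assumption, not the authors' exact MCD scheme):

```python
# Sketch of one explicit step of curvature-driven diffusion,
# dI/dt = |grad I| * div(grad I / |grad I|): level lines are smoothed by
# their curvature, so noise is averaged while straight edges stay intact.
import numpy as np

def curvature_diffusion_step(I, dt=0.1, eps=1e-8):
    Iy, Ix = np.gradient(I)                     # image gradient (rows = y)
    mag = np.sqrt(Ix**2 + Iy**2) + eps          # gradient magnitude
    # divergence of the unit normal field = curvature of the level lines
    _, nxx = np.gradient(Ix / mag)              # d/dx of normal x-component
    nyy, _ = np.gradient(Iy / mag)              # d/dy of normal y-component
    kappa = nxx + nyy
    return I + dt * mag * kappa                 # evolve by curvature flow
```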
In the perceptually transparent coding of images, we use representation and quantization strategies that exploit properties of human perception to obtain an approximate digital image indistinguishable from the original. This image is then encoded in an error free manner. The resulting coders have better performance than error free coding for a comparable quality. Further, by considering changes to images that do not produce perceptible distortion, we identify image characteristics onerous for the encoder but perceptually unimportant. One such characteristic is the typical noise level, often imperceptible, encountered in still images. Thus, we consider adaptive noise removal to improve coder performance, without perceptible degradation of quality. In this paper, several elements contribute to coding efficiency while preserving image quality: adaptive noise removal, additive decomposition of the image with a high activity remainder, coarse quantization of the remainder, progressive representation of the remainder using bilinear or directional interpolation methods, and efficient encoding of the sparse remainder. The overall coding performance improvement due to noise removal and the use of a progressive code is about 18%, as compared to our previous results for perceptually transparent coders. The compression ratio for a set of nine test images is 3.72 for no perceptible loss of quality.
The inadequacy of the classic linear approach to edge detection and scale space filtering lies in the spatial averaging of the Laplacian. The Laplacian is the divergence of the gradient and thus is the divergence of both magnitude and direction. The divergence in magnitude characterizes edges and this divergence must not be averaged if the image structure is to be preserved. We introduce a new nonlinear filtering theory that only averages the divergence of direction. This averaging keeps edges and lines intact as their direction is nondivergent. Noise does not have this nondivergent consistency and its divergent direction is averaged. Higher order structures such as corners are singular points or inflection points in the divergence of direction and also are averaged. Corners are intersection points of edges of nondivergent direction (or smooth curves of small divergence in direction) and their averaging is limited. This approach provides a better compromise between noise removal and preservation of image structure. Experiments that verify and demonstrate the adequacy of this new theory are presented.
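The decomposition underlying this argument: writing the gradient as magnitude times unit direction, the Laplacian splits into a magnitude term (edges, not to be averaged) and a direction term (to be averaged):

```latex
% Splitting the Laplacian into divergence of magnitude and divergence of
% direction, with n the unit gradient direction.
\[
  \nabla^{2} I \;=\; \nabla \cdot \bigl(\lvert\nabla I\rvert\, n\bigr)
  \;=\; \underbrace{\nabla \lvert\nabla I\rvert \cdot n}_{\text{divergence of magnitude}}
  \;+\; \underbrace{\lvert\nabla I\rvert\, \nabla \cdot n}_{\text{divergence of direction}},
  \qquad n = \frac{\nabla I}{\lvert\nabla I\rvert}.
\]
```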