Determining a decision from data is an important DoD research area with far-reaching applications. In particular,
the long-elusive goal of autonomous machines discovering the relations between entities within a situation has
proved to be extremely difficult. Many current sensing systems are devoted to fusing information from a variety
of heterogeneous sensors in order to characterize the entities and relationships in the data. This leads to the
need for representations of relationships and situations which can model the uncertainty that is present in any
system. We develop mathematics for representing a situation where the relations are uncertain and use the work
of Meng to show how to compare probabilistic relations and situations.
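To make this concrete, the sketch below shows one minimal way to encode a probabilistic binary relation as a matrix of membership probabilities and to compare two such relations with an elementwise distance. Both the encoding and the metric are illustrative assumptions; they are not Meng's actual construction.

```python
import numpy as np

def random_prob_relation(n, rng):
    """A probabilistic binary relation on n entities, encoded as a
    matrix P with P[i, j] = Pr((i, j) is in the relation)."""
    return rng.random((n, n))

def relation_distance(P, Q):
    """One illustrative distance between two probabilistic relations
    on the same entity set: the mean absolute difference of the
    pairwise membership probabilities."""
    assert P.shape == Q.shape
    return float(np.abs(P - Q).mean())

rng = np.random.default_rng(0)
P = random_prob_relation(4, rng)
Q = random_prob_relation(4, rng)
print(f"d(P, Q) = {relation_distance(P, Q):.3f}")  # 0 iff the relations agree
```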
We propose a mathematical formulation for a layered sensing architecture based on the theory of categories
that will allow us to abstractly define agents and their interactions in such a way that we can treat human and
machine (or systems of these) agents homogeneously. One particular advantage is that this general formulation
will allow the development of multi-resolution analyses of a given situation that are independent of the particular
models used to represent a given agent or system of agents. In this paper, we define the model and prove basic
facts that will be fundamental in future work. Central to our approach is the integration of uncertainty into our
model. Such a framework is necessitated by our desire to define (among other things) measures of alignment
and efficacy for systems of heterogeneous agents operating in a diverse and complex environment.
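As a loose illustration of the categorical viewpoint (a sketch under assumed names such as Agent and Interaction, not the paper's actual formalism), agents can be modeled as objects and their interactions as composable morphisms, so that human and machine agents are treated identically:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Agent:
    """An object of the category: any human or machine agent."""
    name: str

@dataclass(frozen=True)
class Interaction:
    """A morphism between agents: a directed channel of influence."""
    source: Agent
    target: Agent
    act: Callable[[object], object]  # how a message is transformed

def compose(g: Interaction, f: Interaction) -> Interaction:
    """Composition g . f, defined only when f's target is g's source."""
    assert f.target == g.source, "interactions do not compose"
    return Interaction(f.source, g.target, lambda x: g.act(f.act(x)))

def identity(a: Agent) -> Interaction:
    """Identity morphism: the trivial self-interaction."""
    return Interaction(a, a, lambda x: x)

human, machine = Agent("analyst"), Agent("sensor")
f = Interaction(machine, human, lambda reading: f"report({reading})")
print(compose(identity(human), f).act("track-42"))  # report(track-42)
```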
This paper addresses several fundamental problems that have hindered the development of model-based recognition systems: (a) the feature-correspondence problem, whose complexity grows exponentially with the number of image points versus model points; (b) the restriction of matching image data points to a point-based model (e.g., point-based features); and (c) the local-versus-global minima issue associated with using an optimization model.
Using a convex hull representation for the surfaces of an object, common in CAD models, allows generalizing
the point-to-point matching problem to a point-to-surface matching problem. A discretization of the Euclidean transformation variables, combined with the well-known assignment model of linear programming, leads to a multilinear programming problem. Using a logarithmic/exponential transformation employed in geometric programming, this nonconvex optimization problem can be transformed into a difference-of-convex-functions (DC) optimization problem, which can be solved using a DC programming algorithm.
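The multilinear and DC machinery is beyond the scope of a sketch, but the point-to-surface subproblem underneath it can be illustrated: projecting an image point onto the convex hull of a model facet's vertices. The Frank-Wolfe routine below is one standard way to solve that small convex program; it is an illustrative stand-in, not the paper's algorithm.

```python
import numpy as np

def project_to_hull(p, V, iters=200):
    """Project point p onto conv(V) (rows of V are facet vertices)
    by Frank-Wolfe on min ||V^T w - p||^2 over the simplex w."""
    m = V.shape[0]
    w = np.full(m, 1.0 / m)            # start at the barycenter
    for k in range(iters):
        x = V.T @ w                    # current hull point
        grad = 2.0 * V @ (x - p)       # gradient w.r.t. the weights
        s = np.argmin(grad)            # best vertex of the simplex
        gamma = 2.0 / (k + 2.0)        # standard Frank-Wolfe step size
        w = (1 - gamma) * w
        w[s] += gamma
    return V.T @ w

# Distance from an image point to a triangular model facet:
facet = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
point = np.array([0.2, 0.2, 1.0])
q = project_to_hull(point, facet)
print(np.linalg.norm(point - q))       # ~1.0: the facet lies in z = 0
```

In the full method, this projection would be embedded in the assignment and transformation search rather than called in isolation.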
Procrustes Analysis (least-squares mapping) is typically used as a method of comparing the shapes of two objects. The method relies on matching corresponding points (landmarks) from the data associated with each object. Typically, landmarks are physically meaningful locations (e.g., the end of a nose) whose relationship to the whole object is known. Corresponding landmarks would be the same physical location on two different individuals, so Procrustes analysis is a reasonable method of measuring relative shape. In the application of automatic target recognition, however, the correspondence of landmarks is unknown. In other words, the description of the shape of an object depends on the labeling of the landmarks, an undesirable characteristic. To circumvent the labeling problem (without exhaustively computing the factorial number of correspondences), this paper presents a label-invariant method of shape analysis that uses measurements related to those used in Procrustes Analysis. The label-invariant approach to shape measurement yields near-optimal results. A relation exists between Procrustes Analysis and the label-invariant measurements; however, the relationship is not one-to-one. The goal is to further understand the implications of the nearly optimal results and to refine these intermediate results into a measure of shape that is efficient and one-to-one with the Procrustes metric.
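For concreteness, the sketch below computes the ordinary (labeled) Procrustes distance via the SVD, alongside one simple label-invariant surrogate: the sorted vector of pairwise distances. The surrogate is illustrative; the paper's actual label-invariant measurements are not reproduced here.

```python
import numpy as np

def procrustes_distance(X, Y):
    """Least-squares shape distance between labeled landmark sets X, Y
    (n x d, row i of X corresponds to row i of Y): center, scale to
    unit norm, then rotate optimally via SVD (reflections allowed)."""
    Xc = X - X.mean(0); Yc = Y - Y.mean(0)
    Xc /= np.linalg.norm(Xc); Yc /= np.linalg.norm(Yc)
    _, s, _ = np.linalg.svd(Xc.T @ Yc)
    return float(np.sqrt(max(0.0, 2.0 - 2.0 * s.sum())))

def sorted_distance_signature(X):
    """Label-invariant shape description: the sorted pairwise
    distances of the centered, unit-scale configuration."""
    Xc = X - X.mean(0)
    Xc /= np.linalg.norm(Xc)
    D = np.linalg.norm(Xc[:, None] - Xc[None, :], axis=-1)
    iu = np.triu_indices(len(X), k=1)
    return np.sort(D[iu])

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 2))
perm = rng.permutation(5)
R = np.array([[0.0, -1.0], [1.0, 0.0]])       # 90-degree rotation
Y = X[perm] @ R                                # relabeled, rotated copy
print(procrustes_distance(X, Y))               # nonzero: labels scrambled
print(np.abs(sorted_distance_signature(X)
             - sorted_distance_signature(Y)).max())  # ~0: label-invariant
```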
Object-image (O-I) relations provide a powerful approach to performing detection and recognition with laser radar (LADAR) sensors. This paper presents the basics of O-I relations and shows how they are derived from invariants. It also explains a computationally efficient approach that applies covariants to 3-D LADAR data and presents results. The approach is especially appealing because the detection and segmentation processes are integrated with recognition into a robust algorithm. Finally, the method provides a straightforward approach to handling articulation and multi-scale decomposition.
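As a generic example of the kind of invariant such relations are built from (not the paper's specific covariants), the eigenvalues of a 3-D point cluster's second-moment matrix are unchanged by any rotation and translation:

```python
import numpy as np

def inertia_invariants(P):
    """Rotation/translation-invariant features of a 3-D point cloud P
    (n x 3): the sorted eigenvalues of its second-moment matrix."""
    Q = P - P.mean(0)                            # translation invariance
    return np.sort(np.linalg.eigvalsh(Q.T @ Q / len(P)))

rng = np.random.default_rng(2)
cluster = rng.standard_normal((50, 3))
# A rigid motion: random orthogonal matrix (via QR) plus a translation.
R, _ = np.linalg.qr(rng.standard_normal((3, 3)))
moved = cluster @ R.T + np.array([5.0, -2.0, 7.0])
print(np.allclose(inertia_invariants(cluster), inertia_invariants(moved)))  # True
```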
The present era of limited warfare demands that warfighters have the capability for timely acquisition and precision strikes against enemy ground targets with minimum collateral damage. As a result, automatic target recognition (ATR) and feature-aided tracking (FAT) of moving ground vehicles using high range resolution (HRR) radar have received increased interest in the community. HRR radar is an excellent sensor for potentially identifying moving targets under all-weather, day/night, long-standoff conditions. This paper presents preliminary results of a Veridian Engineering Internal Research and Development effort to determine the feasibility of using invariant HRR signature features to assist a FAT algorithm. The presented method of invariant analysis uses Lie mathematics to determine geometric and system invariants contained within an object/image (O/I) relationship. The fundamental O/I relationship expresses a geometric constraint between a 3-D object (scattering center) and its image (a 1-D HRR profile). The HRR radar sensor model is defined, and the O/I relationship for invariant features is then derived. Although constructing invariants is not a trivial task, once an invariant has been determined, it is computationally simple to implement in a FAT algorithm.
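The forward direction of that constraint is easy to sketch: each 3-D scattering center projects onto the radar line of sight, and binning the resulting ranges yields a 1-D profile. The bin layout and the noncoherent amplitude summation below are simplifying assumptions for illustration.

```python
import numpy as np

def hrr_profile(centers, amplitudes, look_dir, n_bins=64, extent=10.0):
    """1-D HRR profile from 3-D scattering centers: each center's range
    is its projection onto the unit line-of-sight vector; amplitudes
    accumulate in range bins (noncoherent sum, for brevity)."""
    u = look_dir / np.linalg.norm(look_dir)
    ranges = centers @ u                          # the basic O/I projection
    bins = np.linspace(-extent, extent, n_bins + 1)
    idx = np.clip(np.digitize(ranges, bins) - 1, 0, n_bins - 1)
    profile = np.zeros(n_bins)
    np.add.at(profile, idx, amplitudes)
    return profile

centers = np.array([[0.0, 0, 0], [3, 1, 0], [-2, 0, 1]])
amps = np.array([1.0, 0.7, 0.4])
print(hrr_profile(centers, amps, look_dir=np.array([1.0, 0, 0])).nonzero()[0])
```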
KEYWORDS: Scattering, 3D modeling, Radar, 3D acquisition, 3D image processing, Automatic target recognition, 3D image reconstruction, Reflectors, Sensors, Databases
Automatic Target Recognition (ATR) is difficult in general, but especially with RADAR. However, the problem can be greatly simplified by using the 3-D reconstruction techniques presented at SPIE [Stuff] in the previous two years. Instead of matching seemingly random signals in 1-D or 2-D, one must match scattering centers in 3-D. This method tracks scattering centers through an image collection sequence of the kind that would typically be used for SAR image formation. A major difference is that this approach naturally accommodates object motion (in fact, the more the object moves, the better), and the resulting 'image' is a 3-D set of scattering centers. This effort reconstructs scattering centers directly from synthetic data to build a database, in anticipation of comparing the relative separability of these reconstructed scattering centers against more traditional approaches to ATR.
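The inverse problem behind the reconstruction can be sketched in a few lines: if a tracked scattering center's range r_i = u_i . p is observed from several look directions u_i, its 3-D position p follows from linear least squares. This noiseless toy is an illustration of the geometry, not the cited method itself.

```python
import numpy as np

def reconstruct_center(look_dirs, ranges):
    """Recover a scattering center's 3-D position p from range
    observations r_i = u_i . p over several look directions u_i."""
    U = np.asarray(look_dirs)                    # one unit look vector per row
    p, *_ = np.linalg.lstsq(U, np.asarray(ranges), rcond=None)
    return p

rng = np.random.default_rng(3)
p_true = np.array([2.0, -1.0, 0.5])
U = rng.standard_normal((8, 3))
U /= np.linalg.norm(U, axis=1, keepdims=True)    # 8 aspect angles
r = U @ p_true                                   # noiseless ranges
print(np.allclose(reconstruct_center(U, r), p_true))  # True
```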
KEYWORDS: 3D modeling, Radar, Sensors, Image sensors, Scattering, 3D image processing, Motion models, Data modeling, Systems modeling, Synthetic aperture radar
Recent research in invariant theory has determined the fundamental geometric relation between objects and their corresponding 'images.' This relation is independent of the sensor parameters (e.g., RADAR) and of the transformations of the object. The relationship can be used to extract 3-D models from image sequences, a capability that is extremely useful for target recognition, image-sequence compression, understanding, indexing, interpolation, and other applications. Object/image relations have been discovered for different sensors by different researchers. This paper presents an intuitive form of the object/image relations for RADAR systems, with the goal of enhancing interpretation, and gives a high-level example of how a 3-D model is constructed directly from RADAR (or SAR) sequences (with or without independent motion). The primary focus is to provide a basic understanding of how this result can be exploited to advance research in many applications.
In this paper we present a new model-based feature-matching method for an object recognition system. The actual matching takes place in a 2-D image space by comparing a projected image of a 3-D model with a sensor-extracted image of an actual target. The proposed method can be used with images generated by a wide variety of both camera and radar sensors, but we focus our attention on camera images, with some discussion of synthetic aperture radar images. The effectiveness of the method is demonstrated using only point features; an extension to include region features should require only minor revisions to the main structure of the proposed method. The method contains three stages that complete the target recognition process. The inputs to the method are a model-projected image, a sensor-extracted image, an estimated current pose of the sensor with respect to a reference coordinate frame, and the Jacobian function associated with the estimated current sensor pose, which relates 3-D target features to 2-D image features. The first stage uses geometric information about the target model to limit the number of possible corresponding feature sets, the second stage generates a set of possible sensor pose changes by solving a set of optimization problems, and the final stage finds the 'best' change of sensor pose out of all possible ones. This change of sensor pose is added to the current sensor pose to form a new sensor location and orientation. The revised pose can then be used to reproject the model features and subsequently compute a compatibility measure between the model-projected and sensor-extracted images; this quantifies the reliability of the desired target recognition. In this paper we describe each of the three stages of the method and provide experimental results to demonstrate its validity.
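The pose-refinement step can be sketched as a single Gauss-Newton update: the Jacobian maps a pose increment to image-feature motion, so the increment that best explains the residual between sensor-extracted and model-projected features is a linear least-squares solve. The 6-DOF parameterization and the toy Jacobian below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def pose_change(J, projected, extracted):
    """One Gauss-Newton step: the least-squares pose increment dp that
    moves model-projected 2-D features toward sensor-extracted ones,
    J being the (2n x 6) Jacobian of image features w.r.t. pose."""
    residual = (extracted - projected).ravel()   # stack the n 2-D residuals
    dp, *_ = np.linalg.lstsq(J, residual, rcond=None)
    return dp

rng = np.random.default_rng(4)
n = 5
J = rng.standard_normal((2 * n, 6))              # toy Jacobian at current pose
dp_true = 0.01 * rng.standard_normal(6)          # small true pose change
projected = rng.standard_normal((n, 2))
extracted = projected + (J @ dp_true).reshape(n, 2)  # linearized image motion
print(np.allclose(pose_change(J, projected, extracted), dp_true))  # True
```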