It is practical and efficient to simplify targets to point scatterers in radar simulations. With low-resolution radars, the
radar cross section (RCS) is a sufficient feature to characterize the scattering properties of a target. However, the RCS
totals the target scattering properties to a scalar value for each aspect angle. Thus, a more detailed representation of the
target is required with high-resolution radar techniques, such as Inverse Synthetic-Aperture Radar (ISAR). In
straightforward simulation scenarios, high-resolution targets have been modeled by placing identical point scatterers in the
shape of the target, or with a few dominant point scatterers. Such extremely simple arrangements do not take
self-shadowing into account and are not realistic enough for demanding applications.
Our radar response simulation studies required a target characterization akin to RCS, one that would also function in high-resolution
cases and take self-shadowing and multiple reflections into account. Thus, we propose an approach to
converting a 3-dimensional (3D) surface into a set of scatterers with locations, orientations, and directional scattering
properties. The method is intended for far field operation, but could be adjusted for use in the near field. It is based on
ray tracing which provides the self-shadowing and reflections naturally. In this paper, we present ISAR simulation
results employing the proposed method. The constructed scatterer set is scalable to different wavelengths, enabling the
fast production of realistic simulations, including authentic RCS scattering-center formation. This paper contributes to
enhancing the realism of the simulations while keeping them manageable and computationally reasonable.
For some time, applying the theory of pattern recognition and classification to radar signal processing has been
a topic of interest in the field of remote sensing. Efficient operation and target indication are often hindered by
the signal background, which can have properties similar to those of the signal of interest. Because noise and clutter
may constitute most of the response of a surveillance radar, aircraft and other targets of interest can be seen
as anomalies in the data. We propose an algorithm for detecting these anomalies on a heterogeneous clutter
background in each range-Doppler cell, the basic unit in the radar data defined by the resolution in range, angle
and Doppler. The analysis is based on the time history of the response in a cell and its correlation to the
spatial surroundings. If the newest time window of the response in a resolution cell differs statistically from the
time history of the cell, the cell is deemed anomalous. Normal cells are classified as noise or different types of
clutter based on their strength in each Doppler band. Anomalous cells are analyzed using a longer time window,
which emulates a longer coherent illumination. Based on the decorrelation behavior of the response in the long
time window, the anomalous cells are classified as clutter, an airplane, or a helicopter. The algorithm is tested
with both experimental and simulated radar data. The experimental radar data has been recorded in a forested
landscape.
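The per-cell anomaly test described above can be sketched as follows; the test statistic (a z-score on mean power), the window lengths, and the threshold are illustrative assumptions, since the abstract does not fix them:

```python
import numpy as np

def cell_is_anomalous(history, window, threshold=3.0):
    """Flag a resolution cell when its newest time window differs
    statistically from the cell's own time history (here: a z-score
    on mean power; the actual statistic is a modeling choice)."""
    mu = np.mean(history)
    sigma = np.std(history) + 1e-12          # guard against zero spread
    z = abs(np.mean(window) - mu) / (sigma / np.sqrt(len(window)))
    return bool(z > threshold)

rng = np.random.default_rng(0)
history = rng.rayleigh(1.0, 500)             # clutter-like power history
quiet = rng.rayleigh(1.0, 32)                # window from the same clutter
target = rng.rayleigh(1.0, 32) + 5.0         # window with a strong echo

print(cell_is_anomalous(history, quiet), cell_is_anomalous(history, target))
```

Cells flagged this way would then proceed to the longer-window analysis described above.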
Radars are used for various purposes, and we need flexible methods to explain radar response phenomena. In general,
modeling radar response and backscatterers can help in data analysis by providing possible explanations for
measured echoes. However, extracting the exact physical parameters of a real-world scene from radar measurements
is an ill-posed problem.
Our study aims to enhance radar signal interpretation and further to develop data classification methods. In
this paper, we introduce an approach for finding physically sensible explanations for response phenomena during
a long illumination. The proposed procedure uses our comprehensive response model to decompose measured
radar echoes. The model incorporates both a radar model and a backscatterer model. The procedure adapts
the backscatterer model parameters to catch and reproduce a measured Doppler spectrum and its dynamics at a
particular range and angle. A filter bank and a set of features are used to characterize these response properties.
The procedure defines a number of point-scatterers for each frequency band of the measured Doppler spectrum.
Using the same features calculated from the simulated response, it then matches the parameters (the number of
individual backscatterers, their radar cross sections, and their velocities) to the joint Doppler and amplitude behavior of the
measurement. Hence we decompose the response toward its origin. The procedure is scalable and can be applied
to adapt the model to various other features as well, even those of more complex backscatterers. The performance
of the procedure is demonstrated with radar measurements on a controlled arrangement of backscatterers with a
variety of motion states.
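As a rough sketch of the matching idea, the following toy code simulates point scatterers with assumed RCS values and radial velocities, and computes per-band Doppler energies of the kind a filter bank would provide; all parameter values and function names are invented for illustration:

```python
import numpy as np

fs = 1000.0                       # pulse repetition frequency, Hz (assumed)
t = np.arange(256) / fs
wavelength = 0.03                 # a hypothetical 10 GHz radar, metres

def response(rcs_list, v_list):
    """Sum of point-scatterer echoes at the given radial velocities."""
    sig = np.zeros_like(t, dtype=complex)
    for rcs, v in zip(rcs_list, v_list):
        fd = 2.0 * v / wavelength            # Doppler shift of one scatterer
        sig += np.sqrt(rcs) * np.exp(2j * np.pi * fd * t)
    return sig

def band_energies(sig, n_bands=8):
    """Per-band spectral energies, a filter-bank-like feature vector."""
    spec = np.abs(np.fft.fft(sig)) ** 2
    return np.array([b.sum() for b in np.split(spec, n_bands)])

measured = response([1.0, 0.5], [3.0, 6.0])   # stand-in "measurement"
model = response([1.0, 0.5], [3.0, 6.0])      # adapted model parameters
print(np.allclose(band_energies(measured), band_energies(model)))  # True
```

In the actual procedure, the model parameters would be varied until the simulated features match those of the measurement.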
During the last decade, airport safety regulations have been raised to a new level. As the number of
passengers is constantly increasing, security control at checkpoints that is both effective and quick places great demands
on 21st-century security systems. In this paper, we introduce a novel metal detector concept that
makes it possible not only to detect but also to classify hidden items, even though their orientation and exact location
are unknown. Our new prototype walk-through metal detector generates mutually orthogonal homogeneous
magnetic fields so that the measured dipole moments allow even the smallest items to be classified with a
high degree of accuracy in real time. Invariant to rotations of an object, the classification is based
on the eigenvalues of the polarizability tensor, which incorporate information about the item (size, shape, orientation,
etc.); as a further novelty, we treat the eigenvalues as time series. In our laboratory settings, no assumptions
are made about where an item is likely to be situated. Even so, 90 % of the dangerous and
harmless items, including knives, guns, gun parts, and belts, as categorized by a security organisation, are correctly
classified. The misclassifications that do occur are explained by the overly similar electromagnetic properties of the items in question.
The theoretical treatment and simulations are verified via empirical tests conducted using a robotic arm and our
prototype system. In the future, the state-of-the-art system is likely to speed up security controls significantly
while improving safety.
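The rotation invariance underlying the classification can be illustrated in a few lines: the eigenvalues of a (symmetric) polarizability tensor are unchanged by any rotation of the item. The tensor below is a hypothetical stand-in, not a measured one:

```python
import numpy as np

def rotation_invariant_signature(M):
    """Sorted eigenvalues of a measured symmetric polarizability tensor M:
    the same for any orientation of the item in the detector."""
    return np.sort(np.linalg.eigvalsh(M))

# Hypothetical diagonal tensor of an elongated item (e.g. a blade).
M = np.diag([0.1, 0.1, 2.0])

# Rotate the item: the measured tensor becomes R M R^T,
# but its eigenvalues are identical.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
M_rot = R @ M @ R.T

print(rotation_invariant_signature(M))      # eigenvalues of the item
print(rotation_invariant_signature(M_rot))  # same values after rotation
```

Treating these eigenvalues as time series, as the paper proposes, would then add the temporal dimension as the item moves through the fields.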
In recent years, radar land clutter modelling and processing have been aided by Geographic Information Systems
(GIS) and geodata in several recognised research efforts, such as at the Lincoln Laboratory. One aspect of our clutter research
is to study the possibilities of using GIS for clutter classification in the Finnish environment. Since automating this
process yields inaccurate results, and since a need exists to identify and label various types of land clutter sources through
geographic data (geodata), we propose an approach based on the visual interpretation of clutter. We have created
a graphical visualisation tool for merging geodata with radar data interactively, including an option to select the shown
type(s) of geodata. The source identification is based on the visual observation of the output. The tool can also be
utilised when verifying simulated data.
In an example case, we have used the following geodata items: a base map, a terrain model, a database of tall structures,
and a digital elevation model, but other types of geodata can be used as well. Although the potential to enhance the
model is greater when more types of geodata are utilised, clutter sources can be recognised adequately even with a few
carefully selected geodata items. This paper presents an illustrative demonstration using an air surveillance radar
recording. This visual approach with the data merging tool has proved useful, and the results have verified its
practicability. The contribution of this paper focuses on supporting clutter classification research and improving the
understanding of land clutter.
Geographical information systems (GIS) have been the basis for radar ground echo simulations for many years.
Along with the digital elevation model (DEM), present GIS contain characteristics of the terrain. This paper proposes
a computationally sensible simulation procedure to produce realistic radar terrain signatures in the form of raw
data of an airborne pulse Doppler radar. For the backscattering simulation, the model of the ground is based on the DEM
and built with point-form backscattering objects. In addition to the usual DEM utilization for xyz coordinates
and shadowed-region calculation, we assume that each data point in the GIS describes several scatterers in reality.
Approaching the ground truth, we distribute individual scatterers with adjustable attributes to produce authentic
responses of areas such as sea, fields, forests, and built-up areas. This paper illustrates the approach through an
airborne side-looking synthetic aperture radar (SAR) simulation. The results demonstrate the enhanced fidelity with
realistic SAR image features.
The strength of radar response varies considerably. In this regard, the dynamic range of most receivers is not sufficient to operate optimally. As a result, the radar signal may represent only a fraction of the real backscattering phenomena. One way to solve the problem is to use automatic gain control (AGC). It helps to prevent the saturation of responses but inflicts performance degradation on subsequent radar signal processing. The same dynamic range problem exists in other fields of sensing as well. For example, a solution in digital photography is to use various exposure times to determine the most appropriate one for the current conditions. In this paper, a corresponding approach is proposed for analyzing radar responses. The method requires that measurements of a selected area be performed with various gains, such that the resulting dynamic ranges overlap partially. The use of a linear receiver ensures that both the power and the coherent phase statistics can be extracted from the data. Using the proposed approach, a few distributions derived from extensive land clutter recordings of a Finnish landscape are presented.
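A minimal sketch of the multi-gain idea, analogous to exposure bracketing in photography: for each sample, keep the highest-gain recording that is not saturated and divide out the gain to refer it to a common scale. The clipping model, function names, and values are illustrative assumptions, not the paper's processing chain:

```python
import numpy as np

def merge_gains(true_signal, gains, full_scale=1.0):
    """Combine recordings made at several gains into one wide-dynamic-range
    signal: prefer the highest unsaturated gain, then undo the gain."""
    merged = np.full_like(true_signal, np.nan)
    for g in sorted(gains, reverse=True):            # high gain = low noise
        recorded = np.clip(true_signal * g, 0.0, full_scale)
        usable = (recorded < full_scale) & np.isnan(merged)
        merged[usable] = recorded[usable] / g        # back to common scale
    return merged

signal = np.array([0.001, 0.05, 0.4, 3.0])           # wide dynamic range
out = merge_gains(signal, gains=[0.25, 1.0, 16.0])
print(out)
```

Here each sample is recovered at full precision even though no single gain setting covers the whole range.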
The detection and identification of hazardous chemical agents are important problems in the fields of security
and defense. Although the diverse environmental conditions and varying concentrations of the chemical agents
make the problem challenging, the identification system should be able to give early warnings, identify the gas
reliably, and operate with a low false alarm rate. We have researched the detection and identification of chemical
agents with a swept-field aspiration condenser type ion mobility spectrometry prototype. This paper introduces
an identification system which consists of a cumulative sum (CUSUM) algorithm-based change detector and
a neural network classifier. As a novelty, the use of the CUSUM algorithm allows the gas identification task to
be accomplished using carefully selected measurements. For the identification of hazardous agents we, as a
further novelty, utilize principal component analysis to transform the swept-field ion mobility spectra into
a more compact and appropriate form. Neural networks have been found to be a reliable method for spectra
categorization in the context of swept-field technology. However, the proposed spectra reduction raises the
accuracy of the neural network classifier and decreases the number of neurons. Finally, we present a comparison
to the earlier neural network solution and demonstrate that the percentage of correctly classified sweeps can be
considerably raised by using the CUSUM-based change detector.
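The CUSUM change detector at the front of the identification system can be sketched as a one-sided cumulative sum; the drift and threshold parameters below are illustrative, not the tuned values of the prototype:

```python
def cusum_alarm(x, target_mean, drift=0.5, threshold=5.0):
    """One-sided CUSUM: accumulate positive deviations from the expected
    background mean and return the sample index at which the cumulative
    sum first exceeds the threshold (None if no change is declared)."""
    s = 0.0
    for i, xi in enumerate(x):
        s = max(0.0, s + (xi - target_mean - drift))
        if s > threshold:
            return i
    return None

background = [0.1, -0.2, 0.0, 0.3, -0.1]        # sensor noise around zero
gas_onset = [2.0, 2.5, 1.8, 2.2, 2.4]           # simulated agent response
alarm = cusum_alarm(background + gas_onset, target_mean=0.0)
print(alarm)   # alarm raised at sample index 8
```

Once the detector fires, the spectra recorded around the alarm would be the "carefully selected measurements" passed to the classifier.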
This paper presents a method for generating volumetric clutter for air surveillance radar simulation. A complex-valued
radar signal consists of magnitude and phase. In the presented simulation, the radar clutter signal is created
from magnitude and phase distributions and then filtered to imitate the radar signal formation. Radar geometry
can be integrated into the simulation by manipulating the magnitude, phase, and phase difference distributions. Magnitude
is affected by range bin size and distance from the radar. Weather conditions and polarization also affect
the signal. These can be controlled with adjustments to the distribution from which the matrix is created. This
solution offers a simple way to create a background for realistic radar simulation. Different distributions are used
for the signal magnitude and phase of various clutter sources. Typically, a volumetric clutter source consists of many
evenly sized scatterers. The initial phase, originating from randomly distributed particles, can be considered
uniformly distributed. The phase difference over a long time, on the other hand, reveals the radial movement of the particles.
Therefore, the phase difference can be modeled, for example, with a Gaussian distribution and the magnitude with a Weibull
distribution, depending of course on the true environment. As an example, chaff is simulated under varying radial
wind.
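The described clutter generation can be sketched as follows: Weibull-distributed magnitude, uniformly distributed initial phase, and a Gaussian pulse-to-pulse phase difference encoding radial particle motion. All distribution parameters below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 4096                                              # scatterers per cell

magnitude = rng.weibull(2.0, n)                       # Weibull shape k = 2
phase0 = rng.uniform(0.0, 2 * np.pi, n)               # uniform initial phase
dphase = rng.normal(loc=0.3, scale=0.05, size=n)      # Gaussian phase step
                                                      # (radial motion)
pulse1 = magnitude * np.exp(1j * phase0)              # first pulse
pulse2 = magnitude * np.exp(1j * (phase0 + dphase))   # later pulse

# The mean pulse-to-pulse phase difference recovers the radial motion.
est = np.angle(np.mean(pulse2 * np.conj(pulse1)))
print(round(est, 2))
```

Varying the Gaussian mean across cells would emulate, for instance, chaff drifting under a changing radial wind.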
KEYWORDS: Cameras, Surveillance, 3D modeling, Geographic information systems, Systems modeling, Visibility, RGB color model, Photography, Sensors, Imaging systems
Surveillance camera automation and camera network development are growing areas of interest. This paper
proposes an efficient approach to enhancing camera surveillance with Geographic Information Systems (GIS)
when the camera is located at a height of 10-1000 m. A digital elevation model (DEM), a terrain class
model, and a flight obstacle register constitute the exploited auxiliary information. The approach takes into account
the spherical shape of the Earth and realistic terrain slopes. Accordingly, also considering forests, it determines
visible and shadow regions. The efficiency arises from reduced dimensionality in the visibility computation.
Image processing is aided by predicting certain advance features of visible terrain. The features include distance
from the camera and the terrain or object class such as coniferous forest, field, urban site, lake, or mast. The
performance of the approach is studied by comparing a photograph of a Finnish forested landscape with the
prediction. The predicted background fits well, and its potential as a knowledge aid for various purposes becomes
apparent.
Classifier combinations can be used to improve the accuracy of demanding image classification tasks. Using combined classifiers, nonhomogeneous images with noisy and overlapping feature distributions can be accurately classified. This can be done by first classifying each visual descriptor individually and then combining the separate classification results in a final classification. We present an approach to combining classifiers in image classification. In this method, the probability distributions provided by the separate base classifiers are combined into a classification probability vector (CPV) that is used as a feature vector in the final classification. The proposed classifier combination strategy is applied to the classification of natural rock images. The results show that the proposed method outperforms other commonly used probability-based classifier combination strategies in the classification of rock images.
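A minimal sketch of the CPV construction: the class-probability outputs of the base classifiers, one per visual descriptor, are concatenated into a single feature vector for the final classifier. The descriptor names and probabilities are hypothetical:

```python
import numpy as np

def build_cpv(prob_outputs):
    """Concatenate per-descriptor class-probability vectors into one
    classification probability vector (CPV) for the final classifier."""
    return np.concatenate(prob_outputs)

# Hypothetical base-classifier outputs for a 3-class rock problem.
p_texture = np.array([0.7, 0.2, 0.1])   # from a texture descriptor
p_color = np.array([0.5, 0.4, 0.1])     # from a color descriptor

cpv = build_cpv([p_texture, p_color])
print(cpv.shape)   # (6,)
```

The final classifier is then trained on these CPVs instead of on a fixed combination rule such as the product or sum of probabilities.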
In image classification, the common texture-based methods are based on image gray levels. However, the use of color information improves the classification accuracy for colored textures. In this paper, we extract texture features from natural rock images that are used in bedrock investigations. Gaussian bandpass filtering is applied to the color channels of the images in the RGB and HSI color spaces using different scales. The obtained feature vectors are low dimensional, which makes the methods computationally efficient. The results show that by using combinations of different color channels, the classification accuracy can be significantly improved.
The use of image retrieval and classification has several applications in industrial imaging systems, which typically use large image archives. In these applications, the matter of computational efficiency is essential and therefore compact visual descriptors are necessary to describe image content. A novel approach to contour-based shape description using wavelet transform combined with Fourier transform is presented. The proposed method outperforms ordinary Fourier descriptors in the retrieval of complicated industrial shapes without increasing descriptor dimensionality.
KEYWORDS: Radar, Signal to noise ratio, Doppler effect, Signal detection, Target detection, Phase shift keying, Signal processing, Antennas, Fourier transforms, Detection and tracking algorithms
A method assuming linear phase drift is presented to improve radar detection performance. Its use is based on the assumption that the target illumination time comprises multiple coherent pulses or coherent processing intervals (CPI). In a conventional scanning radar, for example, this often inaccurate information can be used for statistical data mapping to point out possible target presence. If coherent integration is desired in a beam-agile system, the method should allow sequential detection. The discussion includes a pragmatic example of utilizing the echo phase progression in the constant false alarm rate (CFAR) processing of a moving target indication (MTI) system. The detection performance is evaluated with scanning radar simulations. The method has also been tested using real-world recordings, and some observations are briefly outlined.
KEYWORDS: Radar, Sensors, Signal processing, Signal to noise ratio, Doppler effect, Edge detection, Signal detection, Detection and tracking algorithms, Statistical analysis, Digital signal processing
The effectiveness of modern weapon systems makes it important to distinguish between friends and foes as early as possible. Radar-signal-based helicopter categorization is a challenging task for all types of radars. An airborne pulse Doppler radar with an appropriate digital signal processing unit has good potential to perform categorization or even classification, provided that the radar parameters are carefully chosen. Moreover, some information about the main rotor parameters of the helicopter types of interest must also be known in advance.
The idea of this paper is to present a helicopter categorization method based on estimates of the main rotor blade tip velocity and the time interval between successive main rotor blade flashes. Both incoherent integration and conventional coherent integration play an important role in the new method. Moreover, a new edge detection algorithm is applied to the coherently integrated signal. Simulations are performed to show the effectiveness of the method.
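The two estimated quantities relate to the rotor through standard helicopter radar relations, sketched below with invented numbers; the actual estimation from blade flashes is the subject of the paper and is not reproduced here:

```python
wavelength = 0.03                 # hypothetical radar wavelength, m

def tip_velocity(max_doppler_hz):
    """Main rotor blade tip velocity from the maximum Doppler shift:
    v_tip = lambda * f_max / 2."""
    return wavelength * max_doppler_hz / 2.0

def flash_interval(rotation_period_s, n_blades):
    """Time between successive main rotor blade flashes, assuming one
    flash per blade per revolution."""
    return rotation_period_s / n_blades

v = tip_velocity(14000.0)         # roughly 210 m/s for these numbers
dt = flash_interval(0.25, 4)      # 0.0625 s between flashes
print(v, dt)
```

Comparing such estimates with the known rotor parameters of candidate helicopter types is what allows the categorization.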
Recently, the need to monitor restricted areas has increased.
Acoustics is one of the available key techniques, but there are some restrictions and constraints to consider. In situations with unknown noise and low SNR, the performance of time-delay-based direction of arrival (DOA) estimators collapses rapidly as the SNR decreases. Outliers are introduced into the estimation results when signals of interest are masked by noise.
There exist several methods for compensating noise-induced errors, such as averaging within subarrays, time delay selection, or various
minimizations. These compensation methods provide an optimum solution with respect to some criteria, but are ineffective against large
errors in multiple time delays.
In this paper, we present a method for removing outliers caused by errors in time delays. First, we utilize the signal propagation speed to compute an error criterion for DOA estimates. Second, estimates with a sufficiently large error criterion are identified as outliers and discarded.
The effectiveness of our method is verified through experiments with
simulations and real data. In both cases we are able to identify and
discard outliers and thus improve estimation reliability. The results
indicate that the given method can be used to gain efficiency and
robustness in DOA estimation applications, such as automatic acoustic
surveillance of large areas.
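The propagation-speed error criterion can be sketched for a single microphone pair: the speed of sound bounds the physically possible delay, so estimates violating |tau| <= d / c (within a tolerance) are discarded as outliers. The geometry and tolerance below are illustrative assumptions:

```python
SPEED_OF_SOUND = 343.0            # m/s, at roughly room temperature

def is_outlier(tau, mic_distance, tolerance=1.05):
    """Flag a time-delay estimate that exceeds the maximum physically
    possible delay between two microphones separated by mic_distance."""
    tau_max = mic_distance / SPEED_OF_SOUND
    return abs(tau) > tolerance * tau_max

d = 0.5                           # 0.5 m microphone spacing
print(is_outlier(0.0010, d))      # within d/c (about 1.46 ms): kept
print(is_outlier(0.0030, d))      # physically impossible: discarded
```

With more than two microphones, the same idea generalizes to checking the mutual consistency of all pairwise delays before they enter the DOA estimate.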
Recently, the need for monitoring parking places, airports, and harbours has increased. Microwaves, infrared-based techniques, vision, and acoustics are the key techniques, but each of them requires a specific kind of post-processing. Far-field target localization methods based on Angle of Arrival (AOA) often neglect the possibility of erroneous angle observations. Three different methods for increasing the accuracy of cross-fixing-based localization are compared. Averaging the AOAs is easily corrupted by outliers, while "m out of k" selection of AOAs suffers from loss of data. A signal-energy-based target location circle is used to validate the cross fixing result, thus improving reliability. The energies of averaged target signals from two arrays are used to calculate a circle on which the target resides. The distance from the cross-fixed location to the circle is used to validate the location. Experiments are carried out with simulated and real data.
Clustering of texture images is a demanding part of multimedia database mining. Most natural textures are non-homogeneous in terms of color and textural properties. In many cases, there is a need for a system that is able to divide non-homogeneous texture images into visually similar clusters. In this paper, we introduce a new method for this purpose. In our clustering technique, the texture images are ordered into a queue based on their visual similarity, and similar texture images can then be selected from this queue. In the similarity evaluation, we use feature distributions based on the color and texture properties of the sample images. The color correlogram is a distribution that has proved effective in characterizing the color and texture properties of non-homogeneous texture images. The correlogram is based on the co-occurrence matrix, which is a statistical tool in texture analysis. In this work, we use gray level and hue correlograms to characterize the colored textures. The similarity between the distributions is measured using several different distance measures, and the queue of texture images is formed based on the distances between the samples. In this paper, we use a test set containing non-homogeneous texture images of ornamental stones.
Clustering of the images stored in a large database is one of the basic tasks in image database mining. In this paper we present a clustering method for an industrial imaging application. The application is a defect detection system used in the paper industry. The system produces gray level images of the defects that occur on the paper surface and stores them in an image database. These defects have different causes, and it is important to associate the defect causes with the different types of defect images. In the clustering procedure presented in this paper, the image database is indexed using certain distinguishing features extracted from the database images. The clustering is performed using an algorithm based on the k-nearest neighbor classifier. Using this algorithm, arbitrarily shaped clusters can be formed in the feature space. The algorithm is applied to the database images in a hierarchical way, and therefore it is possible to use several different feature spaces in the clustering procedure. The images in the obtained clusters are associated with the real defect causes in the industrial process. The experimental results show that the clusters agree well with the traditional classification of the defects.
KEYWORDS: Data mining, Process control, Head, Autoregressive models, Signal processing, Time series analysis, Data modeling, Fourier transforms, Distance measurement, Control systems
Ordinary Time Series Analysis has a long tradition in statistics [3], and it has also been considered in Data Mining [4,13]. Sequential patterns, which are common in many measurements in the process industry and elsewhere, have also been considered in Data Mining [1,14,11]. However, in some cases these two approaches can be merged into a suitable transform. This kind of 2D transform should be selected in such a way that the basis functions support the Data Mining and the interpretation of the results. As an example, a runnability problem on a paper machine was considered: there were problems with fluctuations in paper basis weight. Data Mining was successfully applied to the problem to identify and remove the disturbances. The whole disturbance analysis was based on 86 sequential patterns consisting of 62 point-wise measurements in the cross direction, acquired from the process control system. The 86 consecutive patterns were Slant-transformed and the results were data mined. It was quite easy to find the uneven static distribution of pressure in the head box and to find that the pressure fluctuated in the head box. Based on the considered case, it can be claimed that Data Mining might be a good tool in many troubleshooting problems.
It is common that text documents are characterized and classified by keywords that their authors assign to them. Visa et al. have developed a new methodology based on prototype matching. The prototype is an interesting document or a part of an extracted, interesting text. This prototype is matched against the document database of the monitored document flow. The new methodology is capable of extracting the meaning of a document to a certain degree. Our claim is that the new methodology is also capable of authenticating authorship. To verify this claim, two tests were designed. The test hypothesis was that the words and the word order in the sentences could authenticate the author. In the first test, three authors were selected: William Shakespeare, Edgar Allan Poe, and George Bernard Shaw. Three texts from each author were examined. Every text was used in turn as a prototype, and the two nearest matches to the prototype were noted. The second test uses the Reuters-21578 financial news database, from which a group of 25 short financial news reports by five different authors is examined. Our new methodology and the interesting results from the two tests are reported in this paper. In the first test, all cases were successful for Shakespeare and for Poe; for Shaw, one text was confused with Poe. In the second test, the Reuters-21578 financial news items were identified by author relatively well. The conclusion is that our text mining methodology seems to be capable of authorship attribution.
KEYWORDS: Process control, Image processing, Stochastic processes, Feature extraction, Signal processing, Robots, Visualization, Matrices, Cameras, Chemical elements
The demand for higher product quality, together with recent progress in texture recognition, has made it possible to reconsider old processes in a new way. It has been common for human operators to use visual appearance to control, e.g., mixing, floating, or cooking processes. Here, a methodology based on time series of textured images is reported and demonstrated. The methodology is aimed at helping human operators either to monitor or to control the processes. The main idea is that a sequence of images is taken. Each image in the sequence is interpreted and characterized as a texture image. The textured image is transformed, and suitable features may additionally be extracted from the transformed texture image. The transformed images or the features are used together with the corresponding values from preceding and succeeding images to characterize the process. Some results are reported here. The key points of how to apply the methodology are also discussed.
In many fields, for example business, engineering, and law, there is interest in the search and classification of text documents in large databases. Methods exist for information retrieval purposes; they are mainly based on keywords. In cases where keywords are lacking, information retrieval is problematic. One approach is to use the whole text document as a search key. Neural networks offer an adaptive tool for this purpose. This paper suggests a new adaptive approach to the problem of clustering and searching large text document databases. The approach is a multilevel one based on word-, sentence-, and paragraph-level maps. Here only the word map level is reported. The reported approach is based on smart encoding, on Self-Organizing Maps, and on document histograms. The results are very promising.
In this paper a special type of image segmentation, two-class segmentation, is considered. Defect detection in quality control applications is a typical two-class problem. The main idea in this paper is to train the two-class classifier with fault-free samples only, which is an unexpected approach. The reason is that defect samples are rare and expensive to obtain. The proposed defect detection is based on the following idea: an unknown sample is classified as a defect if it differs enough from the estimated prototypes of fault-free samples. The self-organizing map is used to estimate these prototypes. Surface images are used to demonstrate the proposed image segmentation procedure.
A new approach to object recognition is proposed. The main concern is with irregular objects, which are hard to recognize even for a human. The recognition is based on the contour of an object. The contour is obtained with morphological operators and described with a Freeman chain code. The chain code histogram (CCH) is calculated from the chain code of the contour of an object. For an eight-connected chain code, an eight-dimensional histogram showing the probability of each direction is obtained. The CCH is a translation and scale invariant shape measure. The CCH gives only an approximation of the object's shape, so that similar objects can be grouped together. The discriminatory power of the CCH is demonstrated on machine-printed text and on true irregular objects. In both cases, the experiments show that similar objects are grouped together with the proposed method. However, the sensitivity to small rotations limits the generality of the method.
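A minimal sketch of the CCH computation for an eight-connected chain code; the example contour is a small axis-aligned square, so only the four even directions occur:

```python
from collections import Counter

def chain_code_histogram(chain):
    """Normalized 8-bin histogram of an 8-connected Freeman chain code:
    a translation- and scale-invariant approximation of object shape."""
    counts = Counter(chain)
    n = len(chain)
    return [counts.get(d, 0) / n for d in range(8)]

# Chain code of a 2x2 axis-aligned square contour (directions 0, 2, 4, 6).
square = [0, 0, 2, 2, 4, 4, 6, 6]
cch = chain_code_histogram(square)
print(cch)   # [0.25, 0.0, 0.25, 0.0, 0.25, 0.0, 0.25, 0.0]
```

Rotating the square by a small angle would redistribute counts into the odd directions, which illustrates the rotation sensitivity noted above.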
A new operational system to interpret satellite images is presented. The described method is adaptive: it is trained by examples. In the reported application, a combination of textural and spectral measures is used as the feature vector. The adaptation, or learning, of the extracted feature vectors occurs through a self-organizing process. As a result, a topological feature map is generated. The map is identified using known samples, i.e., examples of clouds, and is later used as a code book for cloud classification. The obtained verification results are good. The presented method is general in the sense that, by reselecting features, it can be applied to new problems.
A method for automatic feature selection is described. The method is based on a suitable transform of an image and an estimated histogram of the magnitudes of the transformed image. The estimation is done by a self-organizing process, which creates a one-dimensional topological feature map that is used as a feature vector. The method is demonstrated on four textured images.
Some neural network based methods for texture classification and segmentation have been published. The motivation for this kind of work might be doubted, because there are many traditional methods that work well. In this paper, a neural network based method for stochastic texture classification and segmentation suggested by Visa is compared with the traditional K-means and k-nearest neighbor classification methods. Both simulated and real data are used. The complexity of the considered methods is also analyzed. The conclusion is that the K-means method is the least successful of the three tested methods. The developed method is slightly more powerful than the k-nearest neighbor method for map sizes 9 × 9 and 10 × 10. The differences are, however, quite small. This means that the choice of classification method depends more on other aspects, such as computational complexity and learning capability, than on classification capability.