KEYWORDS: Fuzzy logic, Unmanned ground vehicles, Unmanned vehicles, Systems modeling, Control systems, Distance measurement, Control systems design, Target detection
Survivability has always been of interest in the defense of armored vehicles, and there have been many reports and papers on the survivability of U.S. Army ground vehicles. A survivability severity model is best described by analogy to the layers of an onion, in which each layer represents a different severity level, a phase of threat detection, and the urgency with which countermeasures must be applied. The objective of this paper is to suggest an evaluation tool containing an algorithm and procedure for evaluating the reliability of manned and unmanned ground vehicles. A decision-making system is proposed in which theoretical survivability is calculated from a threat level expressed as a severity. A generic framework algorithm, covering both linear and non-linear vehicle dynamics and based on straight-path projection of various scenarios using the fuzzy approach, is included in this paper. Further, to increase the level of rigor, a layered fuzzy control system using various vehicle dynamics parameters [1] and a methodology for designing an adaptive hierarchical fuzzy model [2] that accommodates dependencies among system parameters are described as part of the survivability model. It is hoped that different users will tailor this evaluation tool and that it will be used extensively by researchers working in different areas. Several probabilistic cases are included and implemented by converting them to linguistic fuzzy parameters in order to evaluate the algorithm. A simulation model is designed using a supervisory fuzzy rule set, and several simulation studies illustrate the effectiveness of the given approach. The result is a robust and flexible control system.
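The severity-from-threat-level calculation described above can be illustrated with a small fuzzy inference sketch. This is a hypothetical minimal example, not the paper's actual rule base: the membership functions, universes of discourse, and output severity levels are all assumptions.

```python
# Hypothetical fuzzy severity evaluator in the spirit of the onion-layer model.
# Inputs (threat distance, closing speed) and all rules are illustrative only.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def severity(distance_m, closing_speed_mps):
    """Map threat distance and closing speed to a severity grade in [0, 1]."""
    # Fuzzification over assumed universes of discourse.
    near = tri(distance_m, -1, 0, 500)
    far = tri(distance_m, 300, 1000, 1701)
    slow = tri(closing_speed_mps, -1, 0, 30)
    fast = tri(closing_speed_mps, 20, 60, 101)
    # Rule strengths (min for AND), each paired with an output severity level.
    rules = [
        (min(near, fast), 1.0),   # near and fast -> critical layer
        (min(near, slow), 0.6),   # near and slow -> elevated layer
        (min(far, fast), 0.5),    # far and fast  -> caution layer
        (min(far, slow), 0.1),    # far and slow  -> low layer
    ]
    # Weighted-average defuzzification.
    num = sum(w * level for w, level in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

A close, fast-approaching threat should score a higher severity than a distant, slow one, which places it in an inner "onion layer" where countermeasures apply.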
The presence of Unmanned Aerial Vehicles on the battlefield in the near future is increasing day by day, which leads to the problem of sharing airspace with manned air vehicles. Avoiding collisions and/or deploying countermeasures upon threat detection is a crucial issue for most unmanned aerial vehicles [1]. There is a need for a Sense, Avoid or React [2] system to track objects posing a potential collision risk, determine actions to avoid or mitigate a collision, and react with countermeasures after detecting hazardous situations, i.e., mid-air attacks, collisions, flight-path obstacles, or dense clouds. The authors of this paper present an algorithm for a decision-making system based on countermeasures in collision-avoidance and threat-detection scenarios. A general framework for non-linear dynamic systems will be developed, consisting of various collision and risk scenarios with moving and stationary threats and based on straight future projection. The solution will include an algorithm that captures the predicted path of the threat. The proposed optimization aims to maintain minimum separation between the vehicle and threats and to apply the necessary countermeasures if a collision becomes unavoidable. A multi-sensor fusion system will generate the object-status signal, merge it with the vehicle status, and release a collision assessment in the form of a warning level. A fuzzy controller for countermeasures and friendly maneuvers will be presented that generates active and passive measure signals based on the assessment signal and vehicle sensor signals. Design implementations and simulations using FPGA will be included.
KEYWORDS: Field programmable gate arrays, Sensors, Very large scale integration, Ceramics, Fuzzy logic, Data acquisition, Signal processing, Nondestructive evaluation, Fuzzy systems, System identification
Interest has been shown in the problem of real-time crack detection, crack extent measurement and the identification of
the impact source causing the damage. A solution to the problem of impact source identification is presented using a
signal processing technique employing piezoelectric sensors. In order to detect the crack and to identify the source of the
impact, the Fuzzy Logic Approach (FLA) is suggested. Based on the FLA, a procedure to develop the rule base is
given. The implementation of the rules is done using a Hardware Description Language (HDL) such as Verilog. The
procedure from Verilog to VLSI implementation is suggested. FPGA implementation and testing of the suggested
procedure are included. Problems for future work on the development of VLSI hardware to measure the crack extent and
identify the impact sources are given.
The problem of crack detection has attracted the attention of several investigators in areas such as defense, aeronautics, and
marine industries. In this paper we suggest a fuzzy logic approach for detecting cracks and for deciding the
severity of a crack. The data obtained from the data acquisition system are processed and the results presented using various
software tools. Fuzzy rules are developed to determine the severity of the crack, and a light controller is used to indicate that
severity. The simplicity of the approach makes it useful in many fields.
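The rule-base-plus-light-controller idea above can be sketched in a few lines. The membership shapes, the normalized sensor input, and the lamp mapping are all illustrative assumptions, not the paper's actual rules.

```python
# Illustrative fuzzy rules mapping a piezoelectric sensor reading to a
# crack-severity indicator lamp. All thresholds here are assumptions.

def memberships(peak_voltage):
    """Fuzzify a normalized sensor peak (0..1) into linguistic severity grades."""
    low = max(0.0, 1.0 - peak_voltage / 0.4)
    med = max(0.0, 1.0 - abs(peak_voltage - 0.5) / 0.3)
    high = max(0.0, (peak_voltage - 0.6) / 0.4)
    return {"low": low, "medium": med, "high": high}

# Rule base: severity grade -> indicator lamp colour.
LAMP = {"low": "green", "medium": "yellow", "high": "red"}

def crack_lamp(peak_voltage):
    """Pick the lamp for the dominant grade (max-membership defuzzification)."""
    grades = memberships(peak_voltage)
    winner = max(grades, key=grades.get)
    return LAMP[winner]
```

A weak response lights green, a mid-range response yellow, and a strong response red, mirroring the light-controller indication described above.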
NASA has a serious problem with ice that forms on the cryogenic-filled Space Shuttle External Tank (ET) that could
endanger the crew and vehicle. This problem has defied resolution in the past. To find a solution, a cooperative
agreement was developed between NASA-Kennedy Space Center (KSC) and the U.S. Army Tank-automotive and
Armaments Research, Development & Engineering Center (TARDEC). This paper describes the need, initial
investigation, solution methodology, and some results for a mobile near-infrared (IR) ice detection and measurement
system developed by MDA of Canada and jointly tested by the U.S. Army TARDEC and NASA. Performance results
achieved demonstrate that the pre-launch inspection system has the potential to become a critical tool in addressing
NASA's ice problem.
Image fusion techniques have been used for a variety of applications such as medical imaging, navigation,
homeland security and, most importantly, military requirements. Several techniques for image fusion exist and
are already being extended to real-time video fusion. In this paper, a new technique for video image
fusion is presented. We exploit fuzzy techniques for image fusion. This approach has already been
implemented for multi-image fusion in different applications. In the fuzzy approach, each pixel of one image is
fused with the corresponding pixel value of the other image. Fusion is based on the associated rules and
membership grades of the frames. For video image fusion, frames are extracted from the two incoming
videos and registered. The size and distortion of the frames are checked for suitability for the fusion process.
After frame-wise fusion using the fuzzy approach, the frames are sequenced back for video display. Various other
issues, such as real-time implementation, scene effects, application-specific adaptation, and image
alignment, are discussed. We hope that the algorithm developed for the video image fusion process in this
paper will prove effective for real-time image sensor fusion.
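The pixel-wise fuzzy fusion described above can be sketched as follows. The brightness-based membership grade and the weighted combination rule are assumptions for illustration; the paper's actual rules and membership functions are not specified in the abstract.

```python
# Minimal sketch of pixel-wise fuzzy image fusion: each pixel pair is combined
# according to membership grades. The grade definition here is an assumption.

def brightness_grade(p):
    """Membership grade of an 8-bit pixel (0..255) in the 'informative' set."""
    return p / 255.0

def fuse_pixels(p1, p2):
    """Weighted fusion: the pixel with the higher grade dominates the result."""
    g1, g2 = brightness_grade(p1), brightness_grade(p2)
    if g1 + g2 == 0:
        return 0
    return round((g1 * p1 + g2 * p2) / (g1 + g2))

def fuse_frames(frame_a, frame_b):
    """Fuse two equally sized frames (lists of pixel rows) pixel by pixel."""
    assert len(frame_a) == len(frame_b), "registered frames must match in size"
    return [[fuse_pixels(a, b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]
```

In a video pipeline, `fuse_frames` would be applied to each registered frame pair before the fused frames are sequenced back for display.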
Commercial airplanes have become weapons of mass destruction to be used in asymmetric warfare against the United States. There is a clear need for enhanced situational awareness within the passenger cabin of airplanes. If the crew suspected that the security of an aircraft had been compromised, it would be critical for a crew member to be able to clearly and rapidly see what is occurring inside the passenger cabin without having to open the door to the cockpit. In case of emergency, it would also be extremely valuable for ground personnel and aircraft responding to the emergency to be able to visually monitor what is happening inside the aircraft cabin.
Visible, infrared (IR) and sensor-fused imagery of scenes that contain occluded camouflaged threats are compared on a two-dimensional (2D) display and a three-dimensional (3D) display. A 3D display is compared alongside a 2D monitor for hit and miss differences in the probability of detection of objects. Response times are also measured. Image fusion is achieved using a Gaussian-Laplacian pyramidal approach with wavelets for edge enhancement. Detecting potential threats that are camouflaged or difficult to see is important not only for military acquisition problems but also for crowd surveillance and for tactical use such as border patrols. Imaging and display technologies that take advantage of 3D and sensor fusion will be discussed.
The use of near, mid-wavelength and long-wavelength infrared imagery for the detection of mines and concealed weapons is demonstrated using several techniques. The fusion algorithms used are wavelet-based fusion and Fuzzy Logic Approach (FLA) fusion. The FLA is presented as one of several possible methods for combining images from different sensors to achieve an image that displays more information than either image separately. Metrics are suggested that could rate the fidelity of the fused images, such as an entropy metric.
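The entropy metric mentioned above can be sketched as follows. The histogram-based Shannon form is an assumption, since the abstract does not give the exact formulation; higher entropy is read as more information in the fused image.

```python
# Sketch of a fused-image fidelity metric: Shannon entropy of the gray-level
# histogram. The specific formulation is an assumption for illustration.
import math

def entropy(image):
    """Shannon entropy (bits/pixel) of an 8-bit image given as rows of pixels."""
    hist = [0] * 256
    n = 0
    for row in image:
        for p in row:
            hist[p] += 1
            n += 1
    # Sum -p*log2(p) over the occupied histogram bins.
    return -sum((c / n) * math.log2(c / n) for c in hist if c)
```

A uniform image carries no information (entropy 0), while an image split evenly between two gray levels carries one bit per pixel; a fused image would be rated against its source images on this scale.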
Thomas Meitzler, David Bednarz, Eui Sohn, Kimberly Lane, Darryl Bryk, Elena Bankowski, Gulsheen Kaur, Harpreet Singh, Samuel Ebenstein, Gregory Smith, Yelena Rodin, James Rankin
The fusion of visual and infrared sensor images of potential driving hazards in static infrared and visual scenes is computed using the Fuzzy Logic Approach (FLA). The FLA is presented as a new method for combining images from different sensors to achieve an image that displays more information than either image separately. Fuzzy logic is a modeling approach that encodes expert knowledge directly and easily using rules. With the help of membership functions designed for the data set under study, the FLA can model and interpolate to enhance the contrast of the imagery. The Mamdani model is used to combine the images. The fused sensor images are evaluated with metrics that measure the increased perception of a driving hazard in the sensor-fused image. The metrics are correlated to experimental rankings of image quality. A data set containing IR and visual images of driving hazards under different types of atmospheric contrast conditions is fused using the FLA. A holographic matched-filter method (HMFM) is used to scan some of the more difficult images for automated detection. The image rankings are obtained by presenting imagery to subjects in the TARDEC Visual Perception Lab (VPL). The probability of detection of a driving hazard is computed using data obtained in the observer tests. The matched filter is implemented for driving hazard recognition with a spatial filter designed to emulate holographic methods. One of the possible automatic target recognition devices implements a digital/optical cross-correlator that would process sensor-fused images of targets. Such a device may be useful for enhanced automotive vision or military signature recognition of camouflaged vehicles. A textured clutter metric is compared to the experimental rankings.
Thomas Meitzler, Darryl Bryk, Eui Sohn, Kimberly Lane, David Bednarz, Daniel Jusela, Samuel Ebenstein, Gregory Smith, Yelena Rodin, James Rankin, Amer Samman
The purpose of this experiment was to quantitatively measure driver performance for detecting potential road hazards in visual and infrared (IR) imagery of road scenes containing varying combinations of contrast and noise. This pilot test is a first step toward comparing various IR and visual sensors and displays for the purpose of an enhanced vision system to go inside the driver compartment. Visible and IR road imagery obtained was displayed on a large screen and on a PC monitor and subject response times were recorded. Based on the response time, detection probabilities were computed and compared to the known time of occurrence of a driving hazard. The goal was to see what combinations of sensor, contrast and noise enable subjects to have a higher detection probability of potential driving hazards.
KEYWORDS: Fuzzy logic, Data modeling, Visualization, Wavelets, Systems modeling, Visual process modeling, Wavelet transforms, Target detection, Target acquisition, Fuzzy systems
The mean search time of observers looking for targets in visual scenes with clutter is computed using the Fuzzy Logic Approach (FLA). The FLA is presented by the authors as a robust method for the computation of search times and/or probabilities of detection for signature management decisions. The Mamdani/Assilian and Sugeno models have been investigated and are compared. A 44-image data set from TNO is used to build and validate the fuzzy logic model for detection. The input parameters are: local luminance, range, aspect, width, and wavelet edge points; the single output is search time. The Mamdani/Assilian model gave predicted mean search times, from data not used in the training set, that had a 0.957 correlation to the field search times. The data set is reduced using a clustering method, then modeled using the FLA, and the results are compared to experiment.
KEYWORDS: 3D displays, Visualization, Holography, Holographic optical elements, 3D acquisition, Projection systems, 3D visualizations, 3D modeling, Displays, Computer aided design
US Army Tank-Automotive Command researchers are in the early stages of developing an autostereoscopic, 3D holographic visual display system. The present system uses holographic optics, low- and high-resolution projectors, and computer workstation graphics to achieve real-time 3D user interactivity. This system is being used to conduct 3D visual perception studies for the purpose of understanding the effects of 3D in military target visual detection and as an alternative technique for CAD model visualization. The authors describe the present system configuration, operation, some of the technical limitations encountered during system development, and the results of a human perception test that compared subject response times, hit rates, and miss rates of visual detection when subjects used conventional 2D methods versus the 3D holographic image produced by the holographic display system. The results of this study revealed that the 3D HOE system increased the accuracy of perception of moving vehicles. This research has provided some insights into which technology will be best for presenting 3D simulated objects to subjects or designers in the laboratory.
Infrared images in the 3 to 5 and 8 to 12 micron band were taken of soldiers wearing various camouflaged uniforms. The soldiers wearing the uniforms were either standing, crouched or prone. The images were presented to 37 observers and their detection decisions analyzed. Results were analyzed to determine which uniforms offered the most protection to a threat sensor at various ranges. The perception laboratory results were modeled using the Fuzzy Logic Approach and the CAMAELEON model with a resulting Pearson correlation of 0.9.
We present and demonstrate a method to characterize a background scene, to extrapolate the background characteristics into a specified target region, and to generate a synthetic target image with the visual characteristics of the surrounding background. The algorithm is based on a computational model of spatial pattern analysis in the front-end retinal-cortical visual system. It uses nonstationary multi-resolution spatial filtering to extrapolate the intensity and the intensity modulation amplitude of the surrounding background into the target region. The algorithm provides a method to compute the background-induced bias for use as a zero-reference in computational models of target boundary perception and shape discrimination. We demonstrate the method with a complex, heterogeneous scene containing many discrete objects and backgrounds. The contrast and texture of the visualization blend into the local background. In most cases, the target boundaries are difficult to see, and the target regions are difficult to distinguish from the background. The results provide insight into the capabilities and limitations of the underlying model of front-end human visual pattern analysis, and into scene segmentation, shape properties, and the role of prior knowledge of scene organization and object appearance in modeling visual discrimination.
The probability of detection (Pd) of targets in infrared and visually cluttered scenes is computed using the Fuzzy Logic Approach (FLA). The FLA is presented by the authors as a robust and high fidelity method for the computation and prediction of the Pd of targets. The Mamdani/Assilian, Sugeno and Neurofuzzy-based models have been investigated. A limited data set of visual imagery has been used to model the relationships between several input parameters; the contrast, camouflage condition, range, aspect, width, and experimental Pd. The fuzzy and neuro-fuzzy models gave predicted Pd values that had 0.98 correlation to the experimental Pd's. The results obtained indicate the robustness of the fuzzy-based modeling techniques and the applicability of the FLA to those types of problems having to do with the modeling of human object detection and perception in any spectral regime.
KEYWORDS: Target detection, Visual process modeling, Target acquisition, Data modeling, Sensors, Visualization, Performance modeling, Sensor performance, Electro optical modeling, Signal to noise ratio
Wavelet transforms are currently being used for a number of applications such as cue feature and noise extraction from images and acoustic signals. The objective of this paper is to describe and apply the authors' algorithm that uses wavelets for finding the clutter in infrared and visual images. Once the clutter is found, the probability of detection is calculated. The Reynolds identity and Tidhar's and Rotman's probability-of-edge metric are extended to encompass the wavelet methodology for multiscale clutter metrics in IR and visual images.
An extensive effort is ongoing to validate the TARDEC visual mode (TVM). This paper describes in detail some recent efforts to utilize the model for dual need commercial and military target acquisition applications. The recent completion of a visual perception laboratory within TARDEC is a useful tool to calibrate and validate human performance models for specific visual tasks. Some validation examples will be given for low contrast targets along with a description of the TVM and perception laboratory capabilities.
We present, in this paper, a wavelet-based acoustic signal analysis to remotely recognize military vehicles using their sound intercepted by acoustic sensors. Since expedited signal recognition is imperative in many military and industrial situations, we developed an algorithm that provides an automated, fast signal recognition once implemented in a real-time hardware system. This algorithm consists of wavelet preprocessing, feature extraction and compact signal representation, and a simple but effective statistical pattern matching. The current status of the algorithm does not require any training. The training is replaced by human selection of reference signals (e.g., squeak or engine exhaust sound) distinctive to each individual vehicle based on human perception. This allows a fast archiving of any new vehicle type in the database once the signal is collected. The wavelet preprocessing provides time-frequency multiresolution analysis using discrete wavelet transform (DWT). Within each resolution level, feature vectors are generated from statistical parameters and energy content of the wavelet coefficients. After applying our algorithm on the intercepted acoustic signals, the resultant feature vectors are compared with the reference vehicle feature vectors in the database using statistical pattern matching to determine the type of vehicle from where the signal originated. Certainly, statistical pattern matching can be replaced by an artificial neural network (ANN); however, the ANN would require training data sets and time to train the net. Unfortunately, this is not always possible for many real world situations, especially collecting data sets from unfriendly ground vehicles to train the ANN. Our methodology using wavelet preprocessing and statistical pattern matching provides robust acoustic signal recognition. We also present an example of vehicle recognition using acoustic signals collected from two different military ground vehicles. 
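The pipeline described above can be sketched end-to-end: wavelet preprocessing, per-level features, and statistical matching against reference vehicles. The Haar transform, the detail-energy feature, and the Euclidean distance are illustrative assumptions; the paper's actual wavelet family, features, and matching statistic are not specified in the abstract.

```python
# Hedged sketch of DWT preprocessing + feature extraction + pattern matching
# for acoustic vehicle recognition. All specific choices are assumptions.

def haar_dwt(signal):
    """One Haar DWT level: (approximation, detail) coefficient lists."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def feature_vector(signal, levels=3):
    """Energy of the detail coefficients at each resolution level."""
    feats = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        feats.append(sum(x * x for x in detail))
    return feats

def classify(signal, references):
    """Match against reference vehicle feature vectors by squared distance."""
    feats = feature_vector(signal)
    def dist(name):
        return sum((x - y) ** 2 for x, y in zip(feats, references[name]))
    return min(references, key=dist)
```

A new vehicle type is archived simply by storing `feature_vector(reference_signal)` in the database, matching the no-training workflow described above.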
In this paper, we will not present the mathematics involved in this research. Instead, the focus of this paper is on the application of various techniques used to achieve our goal of successful recognition.
In this paper we make a comparison between wavelet transforms and the local cosine transform of various types of images. This builds on our previous work involving acoustic signals, where we found that the local cosine transform gave a more compact representation for certain types of signals and performed as well as wavelets for others. This held even for signals that were transient in nature, where one might expect the wavelets to do better. We are interested in determining if the same holds true for images, which tend to include many transients, such as edges. We are also investigating the extent to which the rms error can be used to evaluate the perceptual quality of the reconstructed images.
KEYWORDS: Visual process modeling, Spatial frequencies, Linear filtering, Target acquisition, Image filtering, Signal to noise ratio, Computer vision technology, Modulation, Visualization, Human vision and color perception
Target acquisition methodology for infrared (IR) and visual man-in-the-loop imaging sensors has several limitations for many sensor performance assessment applications. Recent advances in computational vision modeling (CVM) have made dramatic improvements in the understanding of early human vision processes. A simple model of neural receptive fields consists of a generic image representation of the spatial processing characteristics for early vision cortical areas. The input image is first divided into its three color opponent components with each axis further decomposed into a set of band pass spatial frequency filters with different center frequencies and orientations. The spatial frequency decomposition is accomplished by an efficient encoding algorithm incorporating a hierarchical cascading Gaussian pyramid algorithm which is an alternating sequence of image output passing through Nyquist low pass spatial filter and subsampling local operators for image encoding. This paper examines the limitations of earlier target acquisition models and describes a computational model which starts with actual stimulus images as an input. It predicts human performance of experimental tasks by attaching a signal-to-noise ratio (SNR) to each spatial frequency channel, and then uses a combining function to define a composite d' parameter for a signal detection theory calculation of probabilities of detection and false alarm. Several examples of the model are applied to various detection applications.
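The cascading Gaussian-pyramid step described above (low-pass filtering followed by subsampling) can be sketched in one dimension. The 1-4-6-4-1 binomial kernel is a common Gaussian approximation and an assumption here; the model's actual filters and channel structure are richer than this.

```python
# Illustrative 1D sketch of one Gaussian-pyramid encoding step: Nyquist low-pass
# filtering with a binomial kernel, then 2:1 subsampling.

KERNEL = [1 / 16, 4 / 16, 6 / 16, 4 / 16, 1 / 16]  # assumed Gaussian approximation

def reduce_level(signal):
    """Blur with the binomial kernel (edge-clamped), then keep every other sample."""
    n = len(signal)
    blurred = []
    for i in range(n):
        acc = 0.0
        for k, w in enumerate(KERNEL):
            j = min(max(i + k - 2, 0), n - 1)  # clamp indices at the borders
            acc += w * signal[j]
        blurred.append(acc)
    return blurred[::2]

def pyramid(signal, levels):
    """Return the sequence of successively low-passed and subsampled levels."""
    out = [list(signal)]
    for _ in range(levels):
        out.append(reduce_level(out[-1]))
    return out
```

Each level halves the sample count, and a constant signal passes through unchanged because the kernel sums to one; band-pass channels would then be formed from differences between adjacent levels.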
This paper presents a simulation and comparison of two different infrared (IR) imaging systems in terms of their use in automotive collision avoidance and vision enhancement applications. The first half of this study concerns the simulation of a 'cooled' shortwave focal plane array infrared imaging system and an 'uncooled' focal plane array infrared imaging system. This is done using the United States Army's Tank-Automotive Research Development and Engineering Center's (TARDEC) thermal image model (TTIM). Visual images of automobiles as seen through a forward looking infrared sensor are generated, using TTIM, under a variety of viewing range and rain conditions. The second half of the study focuses on a comparison between the two simulated sensors. This comparison is undertaken from the standpoint of the ability of a human observer to detect potential (collision) targets when looking through the two different sensors. A measure of the target's detectability is derived for each sensor by using TARDEC's visual model (TVM). The authors found that the uncooled pyroelectric FPA gives excellent imagery; combined with the atmospheric advantages and the higher blackbody exitance of the 7.5-13.5 micron band, the uncooled sensor is therefore the better choice for imaging through numerous atmospheric conditions compared to the 3.4-5.5 micron cooled sensor.
This paper examines the applicability of computational vision models (CVM) to characterize thermal and visual imagery. A specific CVM model is described for the analysis of individual target characteristics and background clutter. A unique feature of the methodology is the spatial and temporal decomposition of the input image into various bandpass filters or channels. A description is given of the various model processes along with some representative examples of the subsequent analysis.
In this paper, we propose a first-order fused HMM-ANN (hidden Markov model and artificial neural net) classifier using feature vectors extracted from ground vehicle acoustic signals. The feature vectors applied in this paper are Fourier power spectrum and scale-invariant wavelet coefficients. Our fused classifier network robustly provides a better performance for a variety of ground vehicle acoustic signals when compared to a classifier with either HMM or ANN alone. We emphasize the use of scale-invariant wavelet transforms to extract scale-invariant wavelet coefficient features because they play a vital role in classifying and identifying unknown ground vehicle acoustic signals that are time-varying in scale structure.
In this paper we consider wavelet analyses of acoustic signatures of ground vehicles. We select two test cases, a squeak emanating from a tank track and the clatter of a tank running on pavement. We examine various cost functions that can be utilized in determining which set of wavelet basis functions give the most accurate representation of a given signal. We compare different orthogonal and biorthogonal wavelet transforms with each other and with the local cosine transform. We found that the local cosine transform performed better for the squeak than the particular wavelet packet transforms that we used, while they both gave approximately equivalent results for the clatter signal.
PCTTIM was developed under the joint sponsorship of TARDEC and the 7th Army Training Command in Germany as an instructional tool for the purpose of familiarizing thermal sight operators with a variety of vehicle types, thermal viewer types, and atmospheric effects. For this initial version the design goals were modest. We needed to present a user-friendly interface which allowed the operator to view a thermal image adjusted for atmospheric, optical, detector, and electronic effects. To this end, we took knowledge gained from implementing TTIM under Unix and built a simplified version on an Intel-based PC system. PCTTIM is written in C++ (Borland C++ version 3.1 for Windows) using Borland's Object Windows Library (OWL). This paper is divided into two major sections, a Model Description section and a Future Enhancements section. Each section is subdivided into user interface related issues and IR effects modeling issues. Under the Model Description section, the user interface sub-section is further subdivided by point of view. The student user's perspective is covered first, then the instructor user's perspective is covered.
An image based comparison of modeled IR cameras in the medium- wave (MW) and longwave (LW) bands is done using the TARDEC thermal image model (TTIM) and LOWTRAN7. A state-of-the-art staring focal plane array (FPA), a common module scanning FLIR, and a scanning dual-band sensor are modeled. The simulations using TTIM demonstrate the imaging performance of the cameras as well as the degradation caused by the atmosphere in the two bands. Atmospheric degradation to the image is simulated in rain and fog in northern hemisphere environments.
EO/IR/FLIR sensor performance models currently employ thermal difference metrics to predict target detection, recognition, and identification ranges in conjunction with minimum resolvable temperature difference (MRT) curves. In this paper, we present a target, atmosphere, background, and sensor-specific (TABSS) thermal difference metric, minimizing shortcomings and deficiencies of other thermal difference metrics currently used in thermal imaging system performance models. This metric is parametrically compared with other ΔT metrics. We also investigate the behavior of target, background, and scene pixel variances as the scene maps to a smaller number of pixels, which reveals potential applications in clutter metrics as well as detection, recognition, and identification range predictions. Finally, we survey the current status of sensor performance models to seek an application of the TABSS ΔT metric. We find that this metric will enhance current thermal imaging system performance models to accurately predict detection, recognition, and identification ranges not only when the thermal difference is large, but especially when it is small.
The conventional area weighted average temperature (AWAT) ΔT is a primary performance measure for characterizing target/background scenes. However, the AWAT definition is widely recognized as being inadequate for representing observer sensitivity in many target detection and acquisition tasks. This situation is particularly true for targets which are at short ranges relative to the observer or viewed through powered optics. In these cases the mid and high spatial frequency components provide distinctive cue features which dominate over the average or aggregate characteristics of the target. The authors examine alternative definitions of ΔT in order to identify more robust and accurate metrics for the evaluation of sensor and signature countermeasure performance. The analysis indicates that target/background scene descriptions using simple average parameters such as the mean and standard deviation are not sufficient for characterizing imaging sensor performance against targets with internal texture and contrast gradients in background clutter.
The authors report the statistical analysis of infrared scenes containing a military ground vehicle. The purpose is to attempt to determine the important variables in clutter as well as the robustness of the present definition of clutter through computer simulation. Both variance based and texture based clutter metrics are compared. The authors analyzed both cluttered and non-cluttered scenes.
EO/IR/Laser detection of a target amidst clutter/background is a difficult problem often treated with simplistic models. Unlike noise, clutter is more complex, neither spectrally white nor statistically Gaussian. Therefore, it is insufficient to lump clutter with noise and use standard detection curves. Battelle has produced image randomization software called BATRAN (Background and Target Randomization) which computes various types of statistical distributions to randomize background and target pixels separately. The types of statistics implemented include exponential, Gaussian, log-normal, and Rice distributions for both the background and target. In an effort to identify a more robust and accurate ΔT metric definition for background and target matching, Battelle also developed a new ΔT metric definition and its equation using RMS pixel-based higher order statistics for the background and target signature pixel data in a scene image. This new ΔT metric provides a better estimate of the true signature difference between the background/clutter and target, enabling more accurate matching of the background/clutter and target for use in sensor detection performance assessment.
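The contrast between the conventional AWAT ΔT and an RMS pixel-based ΔT can be made concrete with a small sketch. The RMS form below is an assumption for illustration; Battelle's actual higher-order-statistics equation is not given in the abstract.

```python
# Sketch comparing the area-weighted average ΔT with an assumed RMS
# pixel-based ΔT over target and background pixel temperatures.

def awat_delta_t(target_pixels, background_pixels):
    """Difference of area-weighted average temperatures (conventional AWAT ΔT)."""
    t_mean = sum(target_pixels) / len(target_pixels)
    b_mean = sum(background_pixels) / len(background_pixels)
    return t_mean - b_mean

def rms_delta_t(target_pixels, background_pixels):
    """Root-mean-square of per-pixel target deviations from the background mean."""
    b_mean = sum(background_pixels) / len(background_pixels)
    mse = sum((p - b_mean) ** 2 for p in target_pixels) / len(target_pixels)
    return mse ** 0.5
```

A textured target whose mean temperature equals the background gives an AWAT ΔT of zero yet a large RMS ΔT, which is exactly the failure mode of averaged metrics noted above.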
The Army performance assessment methodology for visual and infrared countermeasures typically uses an area weighted average temperature/contrast (AWAT or AWAC) description of an equivalent uniform target in a uniform background scene. A simplistic target/background description is not sufficient to evaluate most signature countermeasure (CM) applications. This paper analyzes alternative strategies for using a realistic target/background model to achieve valid assessments of CM technology.
The authors report the statistical analysis of a digitized cluttered background scene containing a military ground vehicle. This is the first phase of a study to evaluate several sensors and scenes to generate a statistical measure of IR sensor performance based on the pixel by pixel correlation of the output imagery.
This paper investigates changes in target detectability due to alterations in target and clutter contrast structures brought about by sensor aliasing. A human observer model which is sensitive to structural contrast differences has been developed and is used to assess target detectability in aliased imagery generated by the TACOM Thermal Image Model. Results indicate that aliasing can influence target detectability.
The infrared exitance of steel plates with several emissivities is modeled using PRISM 3.0 and LOWTRAN7 under sky backgrounds representative of Middle East desert conditions in the summer. LOWTRAN7 is used to calculate the downward thermal radiance of a desert haze atmosphere with multiple scattering. PRISM 3.0 incorporates the results from LOWTRAN7 into annular rings that represent the temperature gradient of the sky dome and predicts the apparent temperature of the plates in the 8 to 14 micron band. This study is part of a preliminary look at the issue of passive low observable technology for application to ground vehicles and an illustration of state-of-the-art computer-based background modeling and thermal simulation.