Paper · 30 April 1992
Neural networks for distributed sensor data fusion: the Firefly experiment
Robert Y. Levine, Timothy S. Khuon
Abstract
An intuitive architecture for neural-net multisensor data fusion consists of a set of independent sensor neural nets, one for each sensor, coupled to a fusion net. Each sensor net is trained on a representative data set from its particular sensor to map to a hypothesis-space output. The decision outputs from the sensor nets are used to train the fusion net to an overall decision. In this paper the sensor fusion architecture is applied to an experiment involving the multisensor observation of object deployments during the recent Firefly launches. The deployments were measured simultaneously by X-band, CO2 laser, and L-band radars. The range-Doppler images from the X-band and CO2 laser radars were combined with a passive IR spectral simulation of the deployment to form the data inputs to the neural sensor fusion system. The network was trained to distinguish predeployment, deployment, and postdeployment phases of the launch based on the fusion of these sensors. The success of the system in utilizing sensor synergism for enhanced deployment detection is clearly demonstrated.
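The decision-level fusion architecture the abstract describes can be sketched as follows. This is a minimal illustrative forward pass only, assuming small one-hidden-layer NumPy nets and invented feature dimensions; the paper does not specify layer sizes, activations, or training details, and all names here are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Convert raw scores to a probability distribution over classes."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class SensorNet:
    """One net per sensor, mapping sensor features to a 3-class hypothesis
    (predeployment / deployment / postdeployment). Sizes are assumptions."""
    def __init__(self, n_in, n_hidden=8, n_classes=3):
        self.W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
        self.W2 = rng.normal(scale=0.1, size=(n_hidden, n_classes))

    def forward(self, x):
        h = np.tanh(x @ self.W1)
        return softmax(h @ self.W2)

class FusionNet:
    """Fusion net mapping the concatenated sensor decisions to the
    overall decision, as in the two-stage architecture of the abstract."""
    def __init__(self, n_sensors, n_classes=3, n_hidden=8):
        self.W1 = rng.normal(scale=0.1, size=(n_sensors * n_classes, n_hidden))
        self.W2 = rng.normal(scale=0.1, size=(n_hidden, n_classes))

    def forward(self, sensor_decisions):
        x = np.concatenate(sensor_decisions, axis=-1)
        h = np.tanh(x @ self.W1)
        return softmax(h @ self.W2)

# Three sensor nets, mirroring the X-band radar, CO2 laser radar, and
# passive IR inputs of the experiment (feature dims are invented here).
sensors = [SensorNet(n_in=d) for d in (16, 16, 8)]
fusion = FusionNet(n_sensors=3)

features = [rng.normal(size=(1, d)) for d in (16, 16, 8)]
decisions = [net.forward(x) for net, x in zip(sensors, features)]
overall = fusion.forward(decisions)   # shape (1, 3): launch-phase probabilities
```

In this scheme the fusion net sees only each sensor's decision vector, not raw sensor data, which keeps the fusion stage independent of sensor-specific feature dimensions.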
© (1992) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Robert Y. Levine and Timothy S. Khuon "Neural networks for distributed sensor data fusion: the Firefly experiment", Proc. SPIE 1611, Sensor Fusion IV: Control Paradigms and Data Structures, (30 April 1992); https://doi.org/10.1117/12.57912
CITATIONS
Cited by 2 scholarly publications.
KEYWORDS
Sensors
Sensor fusion
Image fusion
Neural networks
Neurons
Radar
Infrared imaging
