Paper, 9 March 1999
Efficient 3D data fusion for object reconstruction using neural networks
Mostafa G. H. Mostafa, Sameh M. Yamany, Aly A. Farag
Abstract
This paper presents a framework for integrating multiple sensory data, namely sparse range data and dense depth maps obtained from shape from shading, to improve the 3D reconstruction of the visible surfaces of 3D objects. The integration propagates the error difference between the two data sets by fitting a surface to that difference and using it to correct the visible surface obtained from shape from shading. A feedforward neural network fits the surface to the sparse data. We also study the use of the extended Kalman filter for supervised learning and compare it with the backpropagation algorithm. A performance analysis is carried out to determine the best neural network architecture and learning algorithm. The integration of sparse depth measurements is found to greatly enhance, in terms of metric measurements, the 3D visible surface obtained from shape from shading.
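The fusion idea in the abstract can be illustrated with a minimal numpy sketch on synthetic data: a dense depth map with a smooth bias stands in for the shape-from-shading output, sparse accurate samples stand in for the range data, and a small feedforward network fits a surface to their difference, which then corrects the dense map. All data, network sizes, and training settings here are hypothetical; the paper's extended Kalman filter trainer is replaced by plain gradient-descent backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" surface on a grid (stand-in for the real object).
n = 32
xs, ys = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
true_z = np.sin(np.pi * xs) * np.cos(np.pi * ys)

# Dense shape-from-shading estimate: true surface plus a smooth bias
# (a hypothetical low-frequency SFS error).
sfs_z = true_z + 0.3 * (xs + ys)

# Sparse range measurements at a handful of grid points,
# assumed metrically accurate.
idx = rng.choice(n * n, size=50, replace=False)
px, py = xs.ravel()[idx], ys.ravel()[idx]
range_z = true_z.ravel()[idx]

# Error difference between the two data sets at the sparse points.
diff = range_z - sfs_z.ravel()[idx]

# One-hidden-layer feedforward network, trained by backpropagation,
# fits a smooth surface to that sparse difference.
H = 16
W1 = 0.5 * rng.normal(size=(2, H)); b1 = np.zeros(H)
W2 = 0.5 * rng.normal(size=(H, 1)); b2 = np.zeros(1)
X = np.stack([px, py], axis=1)
y = diff[:, None]
lr = 0.05
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    e = (h @ W2 + b2) - y             # prediction error
    gW2 = h.T @ e / len(X); gb2 = e.mean(0)
    dh = (e @ W2.T) * (1 - h ** 2)    # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Evaluate the fitted correction surface on the full grid and
# apply it to the dense SFS depth map.
G = np.stack([xs.ravel(), ys.ravel()], axis=1)
corr = (np.tanh(G @ W1 + b1) @ W2 + b2).reshape(n, n)
fused_z = sfs_z + corr

rmse_before = np.sqrt(np.mean((sfs_z - true_z) ** 2))
rmse_after = np.sqrt(np.mean((fused_z - true_z) ** 2))
```

On this synthetic setup the fused surface should track the true surface more closely than the raw SFS estimate, mirroring the paper's finding that sparse depth measurements improve the metric accuracy of the SFS reconstruction.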
© (1999) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Mostafa G. H. Mostafa, Sameh M. Yamany, and Aly A. Farag "Efficient 3D data fusion for object reconstruction using neural networks", Proc. SPIE 3647, Applications of Artificial Neural Networks in Image Processing IV, (9 March 1999); https://doi.org/10.1117/12.341108
KEYWORDS
Filtering (signal processing)
Neural networks
3D image processing
3D metrology
Sensors
3D modeling
Computing systems