The problem of detecting and tracking the trajectories of a group of small objects with a passive vision system is considered. The system consists of several spatially oriented optical receivers that monitor the viewing area. To solve this problem, we propose an approach based on distributing the optical receivers into stereo pairs, taking into account the orthogonality of their lines of sight to the objects, and on redistributing the stereo pairs in the case of receiver failures. An appropriate algorithm has been developed that improves the reliability of the system, the probability of detecting all objects, and the accuracy of determining spatial coordinates. The results of an experimental examination of the developed algorithm are presented.
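The pairing idea can be illustrated with a small sketch. This is a hypothetical greedy assignment, not the paper's algorithm: it scores every candidate receiver pair by how close their lines of sight to a target are to orthogonal (orthogonal sight lines give the best depth accuracy in a stereo pair) and greedily forms pairs from the best-scoring candidates. Re-running it on the surviving receivers would model redistribution after a failure.

```python
import numpy as np
from itertools import combinations

def los(receiver, target):
    """Unit line-of-sight vector from a receiver position to the target."""
    v = target - receiver
    return v / np.linalg.norm(v)

def pair_receivers(receivers, target):
    """Greedily group receivers into stereo pairs, preferring pairs whose
    lines of sight to the target are closest to orthogonal.
    Hypothetical sketch of the assignment idea, not the paper's method."""
    free = set(range(len(receivers)))
    # Score each candidate pair by |cos angle| between LOS vectors
    # (0 means perfectly orthogonal, 1 means parallel).
    candidates = sorted(
        combinations(range(len(receivers)), 2),
        key=lambda ij: abs(np.dot(los(receivers[ij[0]], target),
                                  los(receivers[ij[1]], target))))
    pairs = []
    for i, j in candidates:
        if i in free and j in free:
            pairs.append((i, j))
            free -= {i, j}
    return pairs
```

For four receivers placed at 90° intervals around a target, this yields two pairs with exactly orthogonal sight lines.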
One of the effective ways to improve object tracking performance is the fusion of base tracking algorithms, combining their advantages and eliminating their disadvantages. Such fusion requires estimating the performance of each base tracking algorithm in real time so that its result can be used in the fused output. In this paper we propose a performance estimation algorithm for an object tracker based on the pyramidal implementation of the Lucas-Kanade feature tracker.
The performance estimation is based on the analysis of variations of intermediate algorithm parameters calculated during object tracking, such as total and mean feature lifetime, eigenvalues, inter-frame mean square coordinate difference, etc. Different combinations of these parameters were tested to obtain the best evaluation quality. Statistical measures were calculated over image sequences one or two hundred frames long. These measures are highly correlated with ground-truth-based performance measures: tracking precision and the ratio of falsely detected features. The experimental research was performed using synthetic and real-world image sequences. We investigated performance estimation effectiveness under different observation conditions and under image degradations caused by noise, blur, and low contrast.
The experimental results show good performance estimation quality. This allows the Lucas-Kanade feature tracker to be fused with other tracking algorithms (correlation-based, segmentation, change detection) to obtain reliable tracking. Since the approach relies on intermediate Lucas-Kanade algorithm parameters, it adds little computational overhead to the tracking process, so real-time performance estimation can be implemented.
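Two of the intermediate statistics named above (feature lifetime and inter-frame mean square coordinate difference) can be computed directly from per-feature track histories. The sketch below is an illustrative reconstruction, assuming tracks are stored as arrays of (x, y) positions; the function name and output keys are not from the paper.

```python
import numpy as np

def tracking_statistics(tracks):
    """Compute intermediate statistics usable as performance indicators.

    `tracks` is a list of per-feature coordinate histories, each a sequence
    of (x, y) positions for the frames in which the feature survived.
    Illustrative sketch; names and structure are assumptions."""
    lifetimes = np.array([len(t) for t in tracks])
    # Inter-frame squared displacement of each feature, pooled over features.
    sq_steps = [np.sum(np.diff(np.asarray(t, float), axis=0) ** 2, axis=1)
                for t in tracks if len(t) > 1]
    msd = float(np.mean(np.concatenate(sq_steps))) if sq_steps else 0.0
    return {
        "total_lifetime": int(lifetimes.sum()),
        "mean_lifetime": float(lifetimes.mean()),
        "mean_sq_step": msd,
    }
```

Statistics like these, accumulated over a window of one or two hundred frames, would then be correlated against ground-truth precision as the abstract describes.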
Because the water surface covers wide areas, remote sensing is the most appropriate way of obtaining information about the ocean environment for vessel tracking, security purposes, ecological studies, and other applications. Processing of synthetic aperture radar (SAR) images is extensively used for control and monitoring of the ocean surface. Image data can be acquired from Earth observation satellites such as TerraSAR-X, ERS, and COSMO-SkyMed. Thus, SAR image processing can be used to solve many problems arising in this field of research. This paper discusses some of them, including ship detection, oil pollution control, and ocean current mapping. Due to the complexity of the problem, several specialized algorithms need to be developed. The oil spill detection algorithm consists of the following main steps: image preprocessing, detection of dark areas, parameter extraction, and classification. The ship detection algorithm consists of the following main steps: prescreening, land masking, image segmentation combined with parameter measurement, ship orientation estimation, and object discrimination. The proposed approach to ocean current mapping is based on the Doppler effect. The results of computer modeling on real SAR images are presented. Based on these results, it is concluded that the proposed approaches can be used in maritime applications.
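The dark-area detection step of the oil spill pipeline can be sketched minimally. Oil films damp capillary waves and appear as dark patches in SAR intensity imagery, so a first-pass detector can threshold below the background statistics. This is a simplified global-threshold stand-in, not the paper's detector; the factor `k` is an assumption, and a real pipeline would add the preprocessing, parameter extraction, and classification stages the abstract lists.

```python
import numpy as np

def detect_dark_areas(sar_image, k=1.5):
    """Flag dark pixels (oil-spill candidates) in a SAR intensity image by
    thresholding below mean - k * std of the image. Minimal sketch; `k` and
    the global (rather than local) statistics are illustrative assumptions."""
    img = np.asarray(sar_image, dtype=float)
    thresh = img.mean() - k * img.std()
    return img < thresh  # boolean candidate mask
```

The resulting mask would feed the parameter extraction stage (area, contrast, shape), which the classifier uses to separate oil spills from look-alikes such as low-wind areas.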
KEYWORDS: 3D modeling, Image processing, 3D image processing, Field programmable gate arrays, Optical spheres, Binary data, Digital signal processing, Cameras, Detection and tracking algorithms, Error analysis
This paper describes the implementation of an orientation estimation algorithm in an FPGA-based vision system. An approach to estimating the orientation of objects lacking axial symmetry is proposed. The suggested algorithm estimates the orientation of a specific known 3D object based on its 3D model. The proposed algorithm consists of two stages: learning and estimation. The learning stage is devoted to exploring the studied object: using the 3D model, we gather a set of training images by rendering the model from viewpoints evenly distributed on a sphere, with the distribution constructed on the geosphere principle. The gathered training image set is used to calculate descriptors, which are then used in the estimation stage. The estimation stage matches the descriptor of an observed image against the training image descriptors. The experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy in all case studies, and real-time performance of the algorithm in the FPGA-based vision system was demonstrated.
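The two stages above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: a Fibonacci spiral is used as a simple stand-in for the geosphere viewpoint distribution, descriptors are assumed to be fixed-length vectors (the extraction step itself is omitted), and matching is plain nearest-neighbor in L2 distance.

```python
import numpy as np

def sphere_viewpoints(n):
    """Approximately even viewpoints on the unit sphere. A Fibonacci spiral
    is used here as a simple stand-in for the geosphere subdivision."""
    i = np.arange(n)
    z = 1 - 2 * (i + 0.5) / n                   # evenly spaced heights
    phi = i * np.pi * (3 - np.sqrt(5))          # golden-angle azimuth steps
    r = np.sqrt(1 - z ** 2)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def estimate_orientation(query_desc, train_descs, viewpoints):
    """Estimation stage: return the viewpoint whose training descriptor is
    nearest (L2) to the observed image descriptor. Descriptor extraction
    from images is assumed to happen elsewhere."""
    d = np.linalg.norm(train_descs - query_desc, axis=1)
    return viewpoints[int(np.argmin(d))]
```

In the learning stage, each viewpoint's rendered image yields one row of `train_descs`; at run time the observed image's descriptor selects the closest stored viewpoint, which encodes the object's orientation relative to the camera.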