Temporal monitoring using remote sensing for topographic mapping requires continuous acquisition of image data. In many countries, but especially in the humid Tropics, heavy cloud cover is a major drawback for visible and infrared remote sensing. The research project presented in this paper uses the idea of integrating data from optical and microwave sensors through digital image fusion techniques to overcome the cloud cover problem. Additionally, the combination of radar with optical data increases the interpretation capabilities and the reliability of the results, owing to the complementary nature of microwave and optical images. While optical data represent the reflectance of ground cover in the visible and near-infrared, radar is very sensitive to the shape, orientation, roughness and moisture content of the illuminated ground objects. This research investigates the geometric aspect of image fusion for topographic map updating. The paper describes experiences gained from an area in the north of The Netherlands (`Friesland'), used as a calibration test site, in comparison with first results from the research test site (`Bengkulu'), located on the south-west coast of Sumatra in Indonesia. The data used for this investigation were acquired by SPOT, Landsat, ERS-1 and JERS-1.
We describe a new method for the computation of a disparity map between two stereoscopic satellite images. The disparities are computed along the x and y axes at each point of the image, without the use of the epipolarity condition. To compute the disparity field, a set of ground control points is first detected in both images. Next, the disparities are mapped over the entire image using the kriging method. Finally, the two stereoscopic images are geometrically registered using the disparity maps.
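The kriging step can be sketched as ordinary kriging of the sparse control-point disparities under an assumed exponential variogram (the variogram model and its `range_` and `sill` parameters here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def ordinary_kriging(xy_known, values, xy_query, range_=50.0, sill=1.0, nugget=0.0):
    """Interpolate sparse disparity samples onto query points with
    ordinary kriging under an assumed exponential variogram."""
    def gamma(h):  # exponential variogram model
        return nugget + sill * (1.0 - np.exp(-h / range_))

    n = len(xy_known)
    d = np.linalg.norm(xy_known[:, None, :] - xy_known[None, :, :], axis=-1)
    # Bordered kriging system: [Gamma 1; 1^T 0] [w; mu] = [gamma(q); 1]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0
    out = np.empty(len(xy_query))
    for i, q in enumerate(xy_query):
        b = np.ones(n + 1)
        b[:n] = gamma(np.linalg.norm(xy_known - q, axis=1))
        w = np.linalg.solve(A, b)
        out[i] = w[:n] @ values  # weighted sum of known disparities
    return out
```

With a zero nugget the predictor is exact at the control points, which is the behavior wanted when densifying a disparity field from ground control points.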
This paper describes a complete method for pairing segmented areas on SPOT satellite images. Firstly, we present a segmentation algorithm that uses a multi-scale edge detector and an edge closing process. Then, we propose a marker-free registration method. This process is based upon the minimization of an energy function that can be computed after the segmentation step on the two images. Lastly, we present a method for pairing sets of contiguous fields in the two images. Mapping the fields themselves is not possible because fields may have merged (or been split into several fields) in the interval between the two acquisitions. This method is also based upon the minimization of an energy function, which is performed using Mean Field Theory.
The knowledge of the geometrical relationship between images is a prerequisite for registration. Assuming a conformal affine transformation, four transformation parameters have to be determined. This is done on the basis of the geometrical arrangement of characteristic objects extracted from the images in a preprocessing step, for example by a land use classification yielding forest, pond, or urban regions. The algorithm introduced establishes correspondence between (the centers of gravity of) objects by building and matching so-called ANGLE CHAINS, a linear structure for representing a geometric (2D) arrangement. An example with satellite imagery illustrates the usefulness of the algorithm.
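One plausible reading of an angle chain — not necessarily the authors' exact construction — is the cyclic sequence of angular gaps between object centroids sorted around their common center of gravity; such a sequence is invariant to global rotation, so two images of the same arrangement can be matched by a cyclic shift:

```python
import numpy as np

def angle_chain(centroids):
    """Sort centroids by polar angle around their mean and take the
    differences of successive angles (a rotation-invariant cyclic chain)."""
    c = np.asarray(centroids, float)
    rel = c - c.mean(axis=0)
    ang = np.sort(np.arctan2(rel[:, 1], rel[:, 0]))
    return np.diff(np.concatenate([ang, [ang[0] + 2 * np.pi]]))  # sums to 2*pi

def match_chains(a, b):
    """Best cyclic alignment of two equal-length chains: (shift, cost)."""
    costs = [np.abs(a - np.roll(b, s)).sum() for s in range(len(b))]
    s = int(np.argmin(costs))
    return s, costs[s]
```

A rotated copy of the same object arrangement yields a cyclically shifted chain, so the best-shift cost is near zero for corresponding arrangements.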
This work aimed at developing new multi-source, multi-scale classification techniques for dealing with time series of SPOT and ERS/SAR data. The developed multi-source classification is a contextual classification based on Markov Random Fields (used as regularization models) and on a simple or neural-network-driven multi-scale relaxation process. Its main advantage is that it copes with the absence or distortion of optical classification parameters due to partial cloud cover in SPOT data. The results showed that the multi-source, multi-scale regularization induced a nearly total recovery of the forest hidden by clouds. Compared to a mono-source classification on ERS data, the classification precision is improved. Compared to a mono-source classification on SPOT data, the results are significantly improved as soon as the local cloud cover rate exceeds 7%. Therefore, when applied to time series of SPOT, LANDSAT TM and ERS/SAR data over the same area, this process can provide a refined measurement of the forest cover over time, far less biased by noisy SPOT data. This is at least proved for the classification of wide-ranging environmental phenomena.
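As one concrete instance of MRF-based regularization — a generic Potts-model Iterated Conditional Modes sketch, not the paper's specific relaxation scheme — labels can be smoothed against per-class data costs:

```python
import numpy as np

def icm(unary, beta=1.0, iters=5):
    """Iterated Conditional Modes for a Potts MRF: unary[h, w, c] are
    per-class data costs; beta penalizes label disagreement with the
    4-neighborhood, so isolated noisy labels get smoothed away."""
    labels = np.argmin(unary, axis=2)
    H, W, C = unary.shape
    for _ in range(iters):
        for i in range(H):
            for j in range(W):
                nbr = [labels[x, y] for x, y in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                       if 0 <= x < H and 0 <= y < W]
                cost = unary[i, j].copy()
                for c in range(C):
                    cost[c] += beta * sum(1 for n in nbr if n != c)
                labels[i, j] = int(np.argmin(cost))
    return labels
```

In the cloud-cover setting, pixels with missing optical evidence get weak (flat) unary costs and are effectively filled in by their neighborhood, which is the qualitative behavior the abstract reports.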
This paper deals with nonlinear smoothing of multispectral satellite images, whose components are generally correlated. A literature search revealed the existence of two recent nonlinear multivariate noise smoothing filters. One is a coupled diffusion equation smoothing filter, which consists of simultaneously solving, for each image component, a nonlinear diffusion equation that is coupled with the other equations through a discontinuity function. The other is a so-called FIR vector median hybrid filter, which in essence is a multivariate median filter. In this paper multivariate versions of three effective nonlinear grey level filters are also presented. Two of them are extensions of weighted local mean filters; the third computes an optimal linear combination of the multivariate pixel vector and its local mean vector. The noise reduction and edge-preserving capabilities of these five filters are evaluated in view of an additive noise model for SPOT HRVIR images.
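The core of the FIR vector median hybrid filter mentioned above is the plain vector median: the sample in a window minimizing the summed distance to all other samples. A minimal sliding-window sketch (window size and border handling are illustrative choices):

```python
import numpy as np

def vector_median(window):
    """Vector median of a set of pixel vectors: the sample minimizing
    the summed L2 distance to all others."""
    w = np.asarray(window, float)
    d = np.linalg.norm(w[:, None, :] - w[None, :, :], axis=-1).sum(axis=1)
    return w[np.argmin(d)]

def vmf_filter(img, k=3):
    """Sliding-window vector median filter for an HxWxC image
    (borders are left unfiltered for simplicity)."""
    H, W, C = img.shape
    r = k // 2
    out = img.astype(float).copy()
    for i in range(r, H - r):
        for j in range(r, W - r):
            out[i, j] = vector_median(img[i-r:i+r+1, j-r:j+r+1].reshape(-1, C))
    return out
```

Because the output is always one of the input vectors, the filter never invents new spectral signatures, which is why vector medians preserve edges in correlated multispectral bands.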
Remote sensing applications are often characterized by a high degree of complexity. The large amount of data to be processed and the high degree of uncertainty inherent in this processing make it necessary to actively select the most useful information. We introduce a general framework, called `active fusion', that actively selects and combines information from multiple sources in order to arrive at a reliable result at reasonable cost. An outline is given of how to implement such a framework using Bayesian networks and decision-theoretical techniques. Finally, we develop a number of future scenarios where such an active fusion component might be useful for remote sensing applications.
We present in this paper the use of two auto-adaptive information-fusion methods, derived from possibility theory, for a satellite image classification problem. Several information-fusion methods are available for different kinds of problems. An auto-adaptive fusion modifies its behavior according to the information to be merged: it behaves conjunctively when the sources agree, and turns disjunctive as the conflict between sources grows. In our image processing application, we have so far used conjunctive fusion because the sources usually agree on the choice of a class for a pixel. But as we increase the number of sources, we also increase the difficulty of finding a common choice among all sources for a pixel, so a disjunctive fusion would be more appropriate for such pixels. An auto-adaptive fusion applies a conjunctive fusion for pixels without conflict and turns to a disjunctive fusion as the conflict between sources increases. This yields a better classification than a simple conjunctive fusion.
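The conjunctive-to-disjunctive switch can be illustrated with the classical Dubois-Prade adaptive rule from possibility theory (assuming this is the kind of operator meant; the paper's exact operators may differ):

```python
import numpy as np

def adaptive_fusion(p1, p2):
    """Dubois-Prade adaptive fusion of two possibility distributions:
    renormalized min (conjunctive) when the sources agree, drifting
    toward max (disjunctive) as the conflict 1 - h grows."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    h = np.max(np.minimum(p1, p2))                    # agreement degree
    conj = np.minimum(p1, p2) / h if h > 0 else np.zeros_like(p1)
    disj = np.minimum(np.maximum(p1, p2), 1.0 - h)
    return np.maximum(conj, disj)
```

When two sources fully agree the result is their renormalized intersection; when they contradict each other completely (h = 0), the rule falls back to the union, expressing that either source could be right.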
In contrast to previous studies using a single study area with many classes and restricted topographic parameters, this paper examines variations in image radiometry due to slope in relation to a single vegetation class growing on all azimuths (0 to 360 degrees) and slopes from 10 to 60 degrees. It demonstrates that the non-Lambertian Minnaert model produced substantially better results than more traditional approaches on the cypress and pine forests covering the gorges of southwest Crete. These landforms represent extreme geographic features and include the Samaria gorge, the largest in Europe. To improve the understanding of the model, a sensitivity analysis was performed to evaluate the effect of the main variables known to affect the Minnaert `K' constant. Three gorges were studied using three SPOT images: SPOT-1, August 23, 1986; SPOT-1, April 9, 1987; and SPOT-2, August 30, 1991. The values 0.4, 0.5 and 0.6 were proposed as the K constants of the study class for SPOT bands 1, 2 and 3. Regardless of gorge, image and data, these values produced excellent results.
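The Minnaert correction itself takes the standard form L_H = L cos(e) / (cos^k(i) cos^k(e)), with the incidence angle obtained from the usual sun/slope geometry. A minimal sketch (angle names and the helper below follow the textbook convention, not necessarily the paper's notation):

```python
import math

def cos_incidence(sun_zen, sun_az, slope, aspect):
    """cos i = cos(sun_zen)cos(slope)
             + sin(sun_zen)sin(slope)cos(sun_az - aspect); radians."""
    return (math.cos(sun_zen) * math.cos(slope)
            + math.sin(sun_zen) * math.sin(slope) * math.cos(sun_az - aspect))

def minnaert_correct(L, cos_i, cos_e, k):
    """Minnaert topographic normalization:
    L_H = L * cos(e) / (cos(i)^k * cos(e)^k)."""
    return L * cos_e / (cos_i ** k * cos_e ** k)
```

For k = 1 the formula collapses to the simple Lambertian cosine correction L / cos(i); values of k below 1, like the 0.4-0.6 proposed above, damp the correction for non-Lambertian vegetation canopies.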
A method for reconstructing a 3D model of an object from 2D images by analyzing emission polarization is presented. Using the results of Stokes vector analysis of 2D images of a surface, the normals to that surface can be calculated at a set of points over the image. The reconstruction method is presented, together with a discussion of the effects of errors and some practical limitations of the technique. Finally, some example reconstructions are given.
Within the instantaneous field of view of a scanning device, often more than one object is included, resulting in a pixel in which several characteristics are mixed. Classically, the proportions of the components of such a mixed pixel are estimated using a linear mixture model. In this paper a new method is introduced for estimating the characteristics of these components, from which their proportions can be derived. Experiments with simulated data sets are conducted to compare the methods with respect to their accuracy in estimating the proportions. In addition, it is determined how well the proposed method can estimate the characteristics of each component.
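For reference, the classical linear-mixture baseline that the new method is compared against can be sketched as a sum-to-one constrained least-squares inversion (the endmember spectra in the test are made up for illustration):

```python
import numpy as np

def unmix(pixel, endmembers):
    """Least-squares proportions for a linear mixture model
    pixel ~= sum_m f_m * endmember_m, with the sum-to-one constraint
    enforced softly via an augmented equation row."""
    # endmembers: (m, bands); build a (bands+1, m) system with a ones row
    E = np.vstack([endmembers.T, np.ones(endmembers.shape[0])])
    y = np.concatenate([pixel, [1.0]])
    f, *_ = np.linalg.lstsq(E, y, rcond=None)
    return f
```

When the pixel truly is a linear mixture of the given endmembers, the recovered fractions are exact; non-negativity, if needed, would require a constrained solver instead.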
In this paper we address a new approach to remote sensing imaging problems for airborne/spaceborne radar/SAR imaging systems, stated and treated as ill-posed inverse problems of restoring extended object signals distorted in a random scattering medium. The developed approach is based on combining the Bayesian estimation technique for signal restoration with the constrained regularization method for inverting the signal formation operator of the stochastic data measurement channel. The model-based fusion of diverse information on data, system and image properties enables us to formulate a system/problem-oriented formalism and to develop a robust numerical technique for earth surface imaging in the scattering atmosphere with improved resolution and accuracy. Some computer simulation results are also provided to illustrate the proposed approach.
In this paper, we describe an interpretation tool intended to identify the arcs of a network generated by an automatic road network extraction system. This system is based on the variable use of various extraction methods: intensive for low-level processes, restricted for higher-level processes. Particular attention is paid to the efficiency evaluation of this high-level module and to the modeling of the different objects of a scene.
Automatic 3D reconstruction of scenes is traditionally based on the normalized cross-correlation technique to match stereoscopic images. This matching technique, called area-based matching, makes it possible to determine which pixels in the images are projections of the same 3D point of the analyzed scene. For high resolution stereoscopic images, with a ground resolution down to a few decimeters, the matching problem is more difficult because of the presence of shadow areas, hidden parts, important discontinuities in the 3D structures, and textureless or repetitive-texture regions. These characteristics of high resolution stereoscopic images are real obstacles to the area-based matching technique in a binocular stereovision approach. In this paper, we present a novel method to compute a dense 3D scene reconstruction from a large number of aerial images. It achieves robust and accurate reconstruction and deals with local occlusions and surface discontinuities. The principle of the algorithm is the simultaneous matching of the images with a cross-correlation technique. The location of each camera is unconstrained, and a calibration stage is used to retrieve the epipolar geometry. We finally show the feasibility of this approach to produce robust and accurate matches, on results achieved with synthetic and real images.
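The normalized cross-correlation matching that the paper builds on can be sketched as an exhaustive template search (a naive illustration of zero-mean NCC, not the paper's multi-image algorithm):

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size windows,
    in [-1, 1]; invariant to affine intensity changes."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match(template, search):
    """Exhaustive NCC search of `template` inside `search`;
    returns (row, col, score) of the best window."""
    th, tw = template.shape
    best = (-1, -1, -1.0)
    for i in range(search.shape[0] - th + 1):
        for j in range(search.shape[1] - tw + 1):
            s = ncc(template, search[i:i+th, j:j+tw])
            if s > best[2]:
                best = (i, j, s)
    return best
```

The failure modes the abstract lists are visible here: a textureless template has near-zero denominator, and a repetitive texture produces several near-1 peaks, which is exactly why multi-image matching helps.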
Synthetic aperture radar is an active ranging sensor that is very sensitive to terrain slope. From the information contained in radar image pixels, a Digital Elevation Model (DEM) may be generated. Radargrammetry is a method that derives a topographic map from two overlapping radar images, a set of homologous points, and platform motion and geometry parameters. The method is based on the parallax in the range direction. Stereoscopic radar image analysis consists of three main processing steps: (1) selection and plotting of homologous points, (2) elevation calculation based on the geometry of the radar images (height of and distance between the antennas, and look angles), and (3) elevation interpolation in order to generate a DEM. We develop an automatic search for the locations of homologous points in the two images, taking into account the geometry of radar image acquisition and the curvature of the planet. This search is based on automatic shape recognition using a threshold on the pixel radiometric gradient, and is applied iteratively. The elevation of the homologous points is calculated and interpolated on a regular grid to generate a DEM. The elevation accuracy depends on the spatial resolution. This method has been tested on AIRSAR and Magellan images.
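The elevation calculation of step (2) can be illustrated with the first-order same-side stereo formula h ≈ Δp / (cot θ1 − cot θ2); the paper's full model additionally accounts for platform motion and planetary curvature, which this sketch ignores:

```python
import math

def stereo_height(parallax, theta1, theta2):
    """Height from range parallax for same-side stereo radar:
    h ~= dp / (cot(theta1) - cot(theta2)), look angles in radians.
    First-order flat-Earth approximation only."""
    return parallax / (1.0 / math.tan(theta1) - 1.0 / math.tan(theta2))
```

The formula also shows why elevation accuracy depends on resolution: the parallax Δp is measured in range pixels, so the height error scales with the pixel size divided by the cotangent difference of the two look angles.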
This paper presents a methodology for pixel classification applied to METEOSAT images. It will be used as the first step in the derivation of high space/time resolution ERB (Earth Radiation Budget) images. The classification method combines temporal and spatial analysis with fixed prior knowledge (in the form of a surface properties database) to obtain a final robust unsupervised cloud recognition. The different algorithms combined in this methodology are time series greyscale morphology, k-means clustering, median filtering, and iterative Bayesian clustering based on discriminant analysis using prior probability functions. The combination of these algorithms realizes a fuzzy combination of complementary data in a multi-stage (gradual) classification and decision-making process. The outputs of the different classification stages can be validated autonomously. Technical novelties of this approach reside both in the use of the different algorithms and in the way they are combined.
A new method is proposed for clustering remotely sensed multispectral images. The method uses a binary division process in which division boundaries are determined by a linear discriminant function algorithm. In order to achieve high-speed processing, the image data are compressed and projected onto a 2D subspace. The image data are then repeatedly divided into groups until stopping conditions are satisfied. In this method, the optimal number of clusters is determined automatically according to the statistical properties of the image data. The method is faster than ISODATA and has been successfully applied to actual multispectral images.
In this paper the problem of detecting land cover changes using multitemporal remote sensing images is addressed. An approach is proposed that explicitly identifies what kind of land cover transition has actually taken place in an area. This approach is based on the compound classification of multitemporal images. In particular, a simple model representing the transition probabilities is exploited to strongly simplify the compound classification task. The effectiveness of the proposed approach is confirmed by experimental results obtained on remote sensing images containing simulated land cover transitions.
The k-NN rules and their modifications usually offer very good performance. Their main disadvantage is the necessity of keeping the reference set (i.e. the training set) in computer memory. In the present paper a method is proposed to reduce the size of the reference set without decreasing the classification quality. Ten different experiments with very large real data sets were performed to check the effectiveness of the new approach. Each experiment involved 5 classes, 15 features, 2440 objects in the training set and 6399 objects in the testing set. The results obtained show that a decision rule based on the condensed reference set can offer even better classification quality than one derived from the original data set.
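One classical way to condense a reference set — Hart's condensed nearest neighbor, which may or may not be the paper's exact method — keeps only the prototypes needed for a 1-NN rule that still classifies every training sample correctly:

```python
import numpy as np

def condense(X, y):
    """Hart's condensing: greedily add any sample misclassified by 1-NN
    on the current condensed set, until the set is training-consistent.
    Returns the indices of the retained prototypes."""
    keep = [0]
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            d = np.linalg.norm(X[keep] - X[i], axis=1)
            if y[keep[int(np.argmin(d))]] != y[i]:
                keep.append(i)
                changed = True
    return np.array(keep)
```

On well-separated classes the retained set is dominated by samples near decision boundaries, which is how large memory savings are possible without hurting (and sometimes, as the abstract reports, even helping) classification quality.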
This paper presents the application of Artificial Neural Networks (ANNs) and introduces Genetic Algorithms (GAs) to agricultural land use classification. Daedalus ATM data at 1 m resolution have been used to train and test the algorithms. Layered feed-forward ANNs have been found to have good generalization properties, but the Backpropagation (BP) algorithm is very susceptible to initial conditions and to the problem of local minima; this technique alone is therefore not the best method for the classification of complex multi-dimensional data sets. This paper applies an evolutionary technique for training feed-forward ANNs, which searches the error space for a more promising initialization point. Optimization and learning are two problem domains in which ANNs and GAs have excelled. Evolutionary Artificial Neural Networks, introduced in this paper, can be thought of as a cross between ANNs and GAs. The weights and biases are updated by applying the mutation genetic operator, in analogy with natural selection, where survival of the fittest leads to a near-optimal ANN. These weights and biases are then adopted by the BP algorithm to converge quickly on the global minimum.
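The evolutionary initialization step can be sketched as mutation-plus-selection search over flattened weight vectors, whose survivor then seeds ordinary backpropagation (a generic sketch; the `loss` callable, population size and mutation scale are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def evolve_init(loss, n_weights, pop=20, gens=30, sigma=0.3, seed=0):
    """Mutation + truncation selection over weight vectors.
    `loss` maps a flat weight vector to a scalar training error; the
    fittest quarter survives each generation and spawns mutated copies."""
    rng = np.random.default_rng(seed)
    P = rng.normal(0, 1, (pop, n_weights))
    for _ in range(gens):
        f = np.array([loss(w) for w in P])
        elite = P[np.argsort(f)[:pop // 4]]                  # survivors
        kids = elite[rng.integers(len(elite), size=pop - len(elite))]
        P = np.vstack([elite, kids + rng.normal(0, sigma, kids.shape)])
    f = np.array([loss(w) for w in P])
    return P[np.argmin(f)]
```

Because the elite survive unchanged, the best loss is monotonically non-increasing; BP then performs the fast local descent from this already-promising starting point.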
The use of a neural network for determining land cover/land use change from remotely sensed data is proposed. In this study, a single image containing both spectral and temporal information is created from multidate satellite imagery. The proposed change detection method can be divided into two main steps: training data selection and change detection. At the training step, a training set, basically consisting of the classes of no-change and possible-change data, is obtained from the composited image. The training data are then fed to the neural network to obtain its weights. At the change detection step, the network's weights are employed to detect the change and no-change classes in the combined image. The proposed method is tested on multidate SPOT imagery, and satisfactory change pattern detection is obtained.
A novel method of forming a training set that yields acceptable classification results is addressed in this paper. Since the advent of several sophisticated satellites, remotely sensed data classification has become important in environmental studies. The two major classification approaches are the neural approach and the conventional statistical one. The first and crucial step in carrying out any classification technique is the selection of the training samples forming the learning base. To be successful, the learning base must be sufficiently representative of the studied region. However, the more the land cover resolution of the satellites increases, the more difficult it is to meet these conditions. In this study, an incremental process for forming the learning base is presented. The method is based on the `small and growing' concept. From a small data base built carefully and manually by selecting a few small areas for each class, spectral and contextual criteria are defined. Furthermore, the number of detected classes is validated in order to take into account all the important categories to be classified. Finally, the criteria are associated with the initial base and the initial classification to incorporate new patterns into the learning data base. The proposed method is flexible enough to form a good learning base and proves successful on complex images. Moreover, only additions are required to form the learning base, unlike other incremental methods, which revise the learning set through merging, deletion or addition.
This paper presents the results of terrain cover classification from multispectral SPOT high resolution visible images and an ERS-1 C-band SAR image. A fractal image was extracted from the SAR data using a wavelet transform as a texture measure. Combining the SAR fractal image with SPOT data for terrain cover classification proves effective and efficient, in that the SAR despeckling step is avoided, which naturally preserves the texture information. It was found that the fractal information significantly improves the discrimination of heterogeneous areas such as urban regions, while it slightly degrades accuracy for homogeneous areas such as open water. The overall classification performance is superior to results obtained using the intensity image only.
The detection of image segments featuring low contrast is a task related to the behavior of the mammalian visual system, which localizes contours where changes of contrast occur. In the first part, this paper describes how early-vision mechanisms in the mammalian visual system detect local changes in the image intensity gradient. Two definitions are proposed: (1) a biologically plausible contour detection algorithm; and (2) a biologically compatible segmentation algorithm. In the second part of the paper, a new segmentation method featuring biological compatibility is presented. This procedure detects image regions characterized by Low Contrast (LC) values and is named the Low Contrast Segmentation (LCS) algorithm. LCS employs an iterative pairwise mutually-best merge criterion to merge segment pairs, and the Normalized Vector Distance (NVD) metric to provide a normalized distance measurement between pairs of multivalued vectors. A relevant aspect of NVD is that it supports the independent detection of chromatic and achromatic contrast, which are further combined into a single contrast coefficient. Therefore, NVD makes LCS able to process multispectral as well as monochromatic images. In terms of user interaction, LCS is robust and easy to use, because it requires only two user-defined parameters, both having an intuitive physical meaning and adapting to local statistics. An example compares the performance of LCS with that of other segmentation algorithms.
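A normalized vector distance in the spirit described above can be sketched by splitting contrast into a magnitude (achromatic) and an angular (chromatic) term and combining them into one coefficient — though the paper's exact NVD formula may differ from this illustration:

```python
import numpy as np

def nvd(v1, v2, eps=1e-12):
    """Sketch of a normalized vector distance between two pixel vectors:
    achromatic term = relative magnitude difference, chromatic term =
    sine of the spectral angle; combined by taking the maximum."""
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    n1, n2 = np.linalg.norm(v1), np.linalg.norm(v2)
    achrom = abs(n1 - n2) / max(n1, n2, eps)            # intensity contrast, [0, 1]
    cosang = float(np.dot(v1, v2)) / max(n1 * n2, eps)
    chrom = np.sqrt(max(0.0, 1.0 - cosang ** 2))        # spectral-angle contrast
    return max(achrom, chrom)
```

The two terms are independent by construction: scaling a vector changes only the achromatic term, while rotating it in spectral space changes only the chromatic one, which is the property the abstract attributes to NVD.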
Linear filters are widely used in remote sensing image processing, for tasks such as smoothing, edge detection, feature extraction and wavelet analysis. In the present paper, we present a new method based on orthogonal polynomial integration theory to realize linear filters with reduced, constant complexity and good precision. We first introduce the orthogonal polynomial integration theory and generalize it to convolution calculation. We then present the construction of the orthogonal functions for a given filter, which is the key problem in generalizing our method. To apply the proposed method to edge detection, we present, in particular, a Laguerre integration method to implement the symmetrical exponential filter, an optimal filter for edge detection. Generalization to multi-dimensional cases and to derivative calculation is presented as well. Edge detection with subpixel precision by means of Laguerre integration is addressed. Experimental results on real images are reported.
A non-supervised, auto-adaptive cloud identification scheme for mono-spectral Meteosat data is presented. The identification of clouds is equivalent to the assignment of meteorologically meaningful labels to cloud regions. Automated cloud region detection is reduced to the problem of finding an algorithm that performs a data reduction on Meteosat images while optimally preserving cloud region information. A self-organizing 1D feature map applied to random segments of individual Meteosat channels is shown to meet the requirements of such an algorithm. A study of the segment size indicates that small segment sizes are sufficient, and even preferable to large ones, for consistent mono-spectral cloud region detection. This is explained in terms of the statistical properties of Meteosat images and the structural features of the code vectors (code segments) in the topological map. Decreasing the number of code segments used to reduce the information content of Meteosat channels results in a systematic, consistent loss of cloud region information.
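A minimal sketch of the data-reduction step, assuming a standard 1D Kohonen map trained on image segments; the paper's training schedule and parameters are not specified, so those below are illustrative:

```python
import numpy as np

def train_som1d(segments, n_codes=8, epochs=10, lr0=0.5, sigma0=2.0, seed=0):
    """Train a 1D self-organizing map whose code vectors are image segments.

    Illustrative sketch of the data-reduction step: segments compete for a
    best-matching unit (BMU) on a 1D topological map, and neighbors of the
    BMU are pulled toward the input with a Gaussian neighborhood kernel.
    """
    rng = np.random.default_rng(seed)
    segs = np.asarray(segments, float)
    codes = segs[rng.choice(len(segs), n_codes, replace=False)].copy()
    steps = epochs * len(segs)
    t = 0
    for _ in range(epochs):
        for s in segs[rng.permutation(len(segs))]:
            frac = t / steps
            lr = lr0 * (1 - frac)                      # decaying learning rate
            sigma = sigma0 * (1 - frac) + 0.5          # shrinking neighborhood
            bmu = np.argmin(((codes - s) ** 2).sum(axis=1))
            d = np.arange(n_codes) - bmu               # distance on the 1D map
            h = np.exp(-(d ** 2) / (2 * sigma ** 2))
            codes += lr * h[:, None] * (s - codes)
            t += 1
    return codes

def quantize(segments, codes):
    """Assign each segment to its best-matching code segment (data reduction)."""
    segs = np.asarray(segments, float)
    return np.argmin(((segs[:, None, :] - codes[None]) ** 2).sum(-1), axis=1)
```

After training, each Meteosat segment is replaced by the index of its code segment, which is the reduced representation the cloud labeling then operates on.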
This paper consists of two parts. First, a system is proposed that recognizes buildings in single aerial photographs using a shape-constrained segmentation method and least squares estimation of the building parameters. In the second part, this system is extended to multi-view imagery. The advantage of using multi-view imagery is that buildings not recognized in one image (e.g. due to occlusion) may still be recognized because they are observable in other images. However, this also complicates processing, because objects recognized in different images should correspond if they originate from the same physical objects. A solution to this correspondence problem is presented at the object-hypothesis level.
Motion study for remotely sensed data is a wide research field, involving tracking of structures, measurement of evolution, and forecasting. Several difficulties arise when tracking a vortex within oceanographic images: the structure is complex, and its evolution involves large changes of shape and topology. Therefore, the classical approach to motion estimation based on the `small deformations' hypothesis does not hold: one must add exogenous information, for instance concerning the underlying physical phenomenon. This information may be unavailable, or so complicated that a numerical treatment cannot be carried out. We propose a surface-based model that performs a global matching without relying on local features. It is defined by geometrical constraints that are a simplified approximation of the evolution model of the structure. The model is then applied to tracking a vortex within oceanographic images.
An image interpretation method is presented for the automatic processing of aerial pictures of an urban landscape. In order to improve the picture analysis, some a priori knowledge extracted from a geographic map is introduced. A coherent graph-based model of the city is built, starting with the road network. A global uncertainty management scheme has been designed to evaluate the confidence we can have in the final results. This model and the uncertainty management tend to reflect the hierarchy of the available data and the interpretation levels. The symbolic relationships linking the different kinds of elements are taken into account while propagating and combining the confidence measures along the interpretation process.
This paper examines remote-sensing image analysis as an unsupervised learning task. Images are usually (very) large and represent complex objects. Unsupervised learning, or clustering, may be of great help at several phases of the analysis. First, this paper describes a clustering algorithm. Then, the application of this algorithm to the segmentation phase is demonstrated. It is then argued that radiometry is insufficient to fully understand the scene in thematic terms. The next level of complexity is the incorporation of spatial information. This paper shows how this kind of data can be expressed. Clustering is then extended to deal with such complex, structured data. Experiments are provided to assess the validity of the approach. They show that clustering is a fundamental tool of remote-sensing image analysis, and that its scope may well be larger than initially expected.
We consider SAR segmentation an important step for the operational use of satellite SAR imagery in routine mapping exercises. The use of multi-temporal SAR imagery is of specific interest in areas where optical data are difficult to obtain due to prevailing weather conditions. For areas where timely optical data are available, a hybrid approach can be adopted, still using the same segmentation algorithm described in this paper. We present the results of applying a generic segmentation method to multi-temporal ERS-1 SAR imagery of the Dutch Flevoland agricultural area. The data were recorded during the fall of 1991 and constitute a series of 7 co-registered PRI images. Before segmentation, the data are filtered using a maximum a posteriori (MAP) filtering technique and then byte-scaled to allow segmentation of any combination of (temporal) channels. We evaluate the various channel combinations with respect to segmentation efficiency. The results are compared to an existing database of fixed field boundaries and a vector map of 1991 field boundaries derived from optical data sets (SPOT). We then compare the quality of field-averaged PRI data extracted with polygons generated by the segmentation procedure with that from manually digitized field boundaries. One of our final objectives is to automatically generate multi-temporal backscattering signatures for training both supervised classification by means of neural networks [9] and supervised tillage monitoring [5]. The potential to significantly advance the time of the earliest crop acreage estimates, by combining results from segmentation and knowledge-based classification, is of particular interest in this framework.
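The byte-scaling step can be sketched as a percentile-clipped linear stretch to 8 bits; the clipping percentiles are an assumption, since the paper does not state its scaling bounds:

```python
import numpy as np

def byte_scale(img, p_low=1.0, p_high=99.0):
    """Linearly rescale filtered SAR amplitudes to 8-bit [0, 255].

    Clipping at percentiles keeps a few extreme speckle values from
    compressing the useful dynamic range.  Illustrative sketch; the
    paper does not specify its exact scaling bounds.
    """
    lo, hi = np.percentile(img, [p_low, p_high])
    scaled = (np.asarray(img, float) - lo) / max(hi - lo, 1e-12) * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)
```

Once every temporal channel shares the same 8-bit range, any combination of channels can be fed to the same segmentation algorithm.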
In this paper we investigate the use of wavelet transforms for texture segmentation of remotely sensed images. The method adopted is multiresolution with maximum overlap. Various wavelet filters are considered (two different types of Daubechies filters, Battle–Lemarié filters, and the Haar filter). To investigate the usefulness of these filters and the relevance of the various resolution levels, we introduce a novel probe: for the feature derived from a certain filter combination, we calculate the 2-point correlation function in the feature domain. This function allows us to judge whether a particular feature segregates the data into clusters or not. We also show that it gives an indication of the number of clusters present in the feature space. Finally, we identify the useful features and perform image segmentation using all of them with the help of a C-means clustering technique. We conclude that the most useful results are obtained with the Daubechies coiflet filter.
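The 2-point correlation probe can be sketched as follows, comparing pair distances in the feature domain against a uniformly distributed reference; the estimator details below are illustrative assumptions, not the paper's exact definition:

```python
import numpy as np

def two_point_correlation(feature, radii, seed=0):
    """Two-point correlation of feature values relative to a random sample.

    For each radius r, compare the fraction of feature pairs closer than r
    with the same fraction for values drawn uniformly over the feature
    range.  Ratios well above 1 at small r indicate clustering.
    Illustrative probe; the paper's exact estimator may differ.
    """
    rng = np.random.default_rng(seed)
    f = np.asarray(feature, float).ravel()
    u = rng.uniform(f.min(), f.max(), size=f.size)   # unclustered reference
    df = np.abs(f[:, None] - f[None, :])             # all pair distances
    du = np.abs(u[:, None] - u[None, :])
    out = []
    for r in radii:
        out.append((df < r).mean() / max((du < r).mean(), 1e-12))
    return np.array(out)
```

A feature whose ratio stays near 1 at every radius does not segregate the data and can be dropped before the C-means clustering stage.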
A new data-driven weight initialization method for the back-propagation learning algorithm is proposed, based on generating only those hyperplanes that cut the feature space of the input data. It speeds up training and decreases the possibility of getting trapped in a local minimum. Conventional weight initialization and the new method are compared on synthetic XOR data and on real remote sensing (SAR) data. Back-propagation with the new weight initialization method consistently provided better results than conventional weight initialization on the data investigated.
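One way to generate only data-cutting hyperplanes is to place each hidden unit on the perpendicular bisector of a random pair of training points. This is a sketch in the spirit of the method, not necessarily the authors' exact recipe:

```python
import numpy as np

def datadriven_init(X, n_hidden, seed=0):
    """Initialize first-layer weights so every hyperplane cuts the data cloud.

    Each hidden unit gets the perpendicular bisector of a random pair of
    training points: the hyperplane is guaranteed to pass between data
    samples instead of lying entirely outside the feature space.
    Illustrative scheme in the spirit of the paper, not its exact recipe.
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    W = np.empty((n_hidden, X.shape[1]))
    b = np.empty(n_hidden)
    for h in range(n_hidden):
        i, j = rng.choice(len(X), 2, replace=False)
        w = X[i] - X[j]
        w /= np.linalg.norm(w) + 1e-12          # unit normal
        W[h] = w
        b[h] = -w @ ((X[i] + X[j]) / 2)         # passes through the midpoint
    return W, b
```

By construction each hyperplane produces both positive and negative activations over the training set, so no hidden unit starts out saturated on the same side for all inputs.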
Unlike biological vision, most techniques for computer image processing are not robust over large samples of imagery. Natural systems seem unaffected by the variations in local illumination and texture that interfere with conventional analysis. While change detection algorithms have been partially successful, many important tasks like the extraction of roads and communication lines remain unsolved. The solution to these problems may lie in examining the architectures and algorithms used by biological imaging systems. Pulsed oscillatory neural network designs, based on biomimetics, seem to solve some of these problems. Pulsed oscillatory neural networks are examined for application to image analysis and segmentation of multispectral imagery from the Satellite Pour l'Observation de la Terre (SPOT). Using biological systems as a model for image analysis of complex data, a pulse-coupled network using an integrate-and-fire mechanism is developed. This architecture, based on layers of pulse-coupled neurons, is tested against common image segmentation problems. Using a reset activation pulse similar to that generated by saccadic motor commands, an algorithm is developed which demonstrates that biological vision could be based on adaptive histogram techniques. The architecture is shown to be both biologically plausible and more effective than conventional techniques. Using pulse time-of-arrival as the information carrier, the image is reduced to a time signal, a temporal encoding of imagery, which allows intelligent filtering based on expectation. This technique is uniquely suited to multispectral/multisensor imagery and other sensor fusion problems.
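The integrate-and-fire temporal encoding can be sketched as follows: brighter pixels integrate faster and fire earlier, so the image becomes a time signal. This is a minimal leaky integrate-and-fire layer, without the pulse coupling between neighbors; threshold and leak values are illustrative assumptions:

```python
import numpy as np

def first_spike_times(image, threshold=1.0, leak=0.9, t_max=100):
    """Leaky integrate-and-fire layer: one neuron per pixel, driven by intensity.

    Returns each neuron's first-spike time.  Brighter pixels accumulate
    membrane potential faster and fire earlier, reducing the image to a
    time signal (temporal encoding).  Minimal sketch without the pulse
    coupling between neighboring neurons described in the paper.
    """
    img = np.asarray(image, float)
    v = np.zeros_like(img)                       # membrane potentials
    t_spike = np.full(img.shape, t_max, int)     # t_max means "never fired"
    for t in range(t_max):
        v = leak * v + img                       # leak, then integrate input
        fired = (v >= threshold) & (t_spike == t_max)
        t_spike[fired] = t
        v[v >= threshold] = 0.0                  # reset after firing
    return t_spike
```

Sorting pixels by first-spike time is what turns the spatial image into the temporal code on which expectation-based filtering can then operate.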
Satellite pushbroom scanners deviate from their predetermined positional and rotational trajectories, causing geometric distortion in their scanned imagery. Attitude and orbit control systems usually supply sufficient data for the actual trajectory to be reconstructed through spline interpolation. Geometric correction of imagery requires that image pixels be retroprojected onto the scene surface from points along the reconstructed trajectory and the scene subsequently resampled in a regular tessellation. Since this retroprojection can be very computationally expensive, a trajectory model is used which facilitates an efficient iterative subsampling ray-tracing algorithm. Actual SPOT satellite trajectory data are used for demonstration purposes.
We address the problem of identifying Posidonia oceanica areas in multispectral aerial seacoast images. P. oceanica is a marine phanerogam endemic to the Mediterranean. Several dives have been performed in order to obtain information at specific locations. At each point (context point), a measure of depth has been made and the presence or absence of P. oceanica has been recorded. The first difficulty is to separate the sea from the coast. We propose a least-squares technique to distinguish sea points from the others. Then, the major problem is that the color of P. oceanica in shallow water is the same as the color of sand in deeper water. We proceed in two steps: for each point of the sea, the three closest context points are used to compute a depth map. Then, the sea points are split into three categories with respect to depth. A classification is learned from the context points, and the final segmentation is obtained by generalizing to all points of the sea.
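The depth-map step can be sketched as inverse-distance weighting over the three closest context points; the exact weighting scheme used in the paper is an assumption here:

```python
import numpy as np

def depth_map(sea_points, context_points, context_depths):
    """Estimate depth at each sea point from its three closest context points.

    Inverse-distance weighting of the three nearest in-situ measurements.
    Illustrative version of the interpolation step; the paper does not
    specify its exact weighting scheme.
    """
    P = np.asarray(sea_points, float)      # (n, 2) pixel coordinates
    C = np.asarray(context_points, float)  # (m, 2) dive locations
    z = np.asarray(context_depths, float)  # (m,)  measured depths
    d = np.linalg.norm(P[:, None, :] - C[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :3]     # three closest context points
    d3 = np.take_along_axis(d, idx, axis=1)
    w = 1.0 / (d3 + 1e-9)                  # inverse-distance weights
    return (w * z[idx]).sum(axis=1) / w.sum(axis=1)
```

Splitting the resulting depths into three bands then gives the depth categories within which the spectral classification is learned.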
Image segmentation consists of partitioning an image into meaningful regions. Two classical approaches are usually employed: edge-based methods, which look for local grey-level discontinuities in the image, and region-based ones, which seek parts of the image that are homogeneous in some measurable property such as grey level, contrast or texture. In the context of high resolution satellite image segmentation, it seems more and more difficult to obtain a faithful automatic segmentation using only one of these approaches. Due to the high complexity of the contents of remote sensing images, the current tendency is to combine both techniques to alleviate the problems each of them has when taken separately. This paper describes a hierarchical region-based image segmentation scheme combining several powerful tools derived from mathematical morphology theory with a region growing process. The morphological watershed transformation extracts highly homogeneous grey-level regions from the image, but unfortunately produces a typically severe oversegmentation. A region-region linkage growing process is then employed to improve the over-fine segmentation by merging adjacent regions. Two different approaches are employed to measure the similarity between regions. The algorithm has been applied successfully to different types of remote sensing imagery and to a variety of landscapes. These results show the potential offered by mathematical morphology tools in the field of remote sensing.
We propose an automatic method for detecting thermal fronts in images regardless of the geographical zone. The method is composed of two stages. First, the image is locally thresholded in order to localize candidate front points; second, contour following is carried out according to the priority criterion associated with each candidate extension point. The method has been tested on images obtained from the Advanced Very High Resolution Radiometer aboard the NOAA-11 satellite. The results obtained are similar to those produced by manual detection.
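The local thresholding stage can be sketched as an adaptive gradient threshold computed per neighborhood; the window size and threshold rule below are illustrative assumptions, and the contour-following stage is not reproduced:

```python
import numpy as np

def front_candidates(sst, win=5, k=1.0):
    """Locally threshold the gradient magnitude of a sea-surface-temperature image.

    A pixel is a front candidate when its gradient magnitude exceeds the
    mean plus k standard deviations over a win x win neighborhood, so the
    threshold adapts to each geographical zone.  Illustrative first stage
    only; window size and rule are assumptions.
    """
    img = np.asarray(sst, float)
    gy, gx = np.gradient(img)
    g = np.hypot(gx, gy)                      # gradient magnitude
    h, w = g.shape
    r = win // 2
    cand = np.zeros_like(g, bool)
    for i in range(h):
        for j in range(w):
            patch = g[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            cand[i, j] = g[i, j] > patch.mean() + k * patch.std()
    return cand
```

Because the threshold is recomputed in every neighborhood, weak fronts in quiet zones are still detected while uniformly noisy zones produce few candidates.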
An automated building detection system based on a line-relation graph has been developed previously. This paper describes improvements made to this system, which was originally developed for aerial imagery. A major improvement has been achieved by introducing a `super' building hypothesis, which refers to a building hypothesis formed from a `U'-shaped chain of lines. The paper also reports experiments on automated building detection from 2 m resolution spaceborne imagery (DD5) using the improved system. Although small buildings could not be extracted (as they were not visible), the improved system demonstrated the strong feasibility of (fully) automated extraction of large buildings from DD5 imagery.
In a SPOT image, urban areas generally appear as agglomerates of numerous small uniform regions. They therefore exhibit a characteristic feature: high edge density. In a single sweep over the image, each edge pixel is tested: if the areas of all neighboring regions are less than a predetermined threshold, the current edge pixel is removed. At the end of the sweep, all the internal edges of urban regions have been removed but the external boundary, or silhouette, is kept. This method has been successfully tested on SPOT XS3 images of the region of Bourges, France.
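The sweep can be sketched on a region-label image as follows; 4-neighbour adjacency and the area test are the only ingredients (a minimal sketch, not the authors' implementation):

```python
import numpy as np

def urban_silhouette(labels, min_area):
    """Single sweep over edge pixels: drop an edge pixel when every region
    touching it is smaller than min_area.

    Internal edges of an urban agglomerate (between many small regions)
    disappear, while the external boundary against large rural regions
    survives.  Minimal sketch on a 4-neighbour region-label image.
    """
    lab = np.asarray(labels)
    areas = {l: int((lab == l).sum()) for l in np.unique(lab)}
    h, w = lab.shape
    keep = np.zeros((h, w), bool)
    for i in range(h):
        for j in range(w):
            nb = {lab[i, j]}                     # labels touching this pixel
            if i > 0: nb.add(lab[i - 1, j])
            if i < h - 1: nb.add(lab[i + 1, j])
            if j > 0: nb.add(lab[i, j - 1])
            if j < w - 1: nb.add(lab[i, j + 1])
            if len(nb) > 1:                      # edge pixel
                keep[i, j] = any(areas[l] >= min_area for l in nb)
    return keep
```

The surviving edge pixels trace exactly the boundary between the urban agglomerate and the large homogeneous regions around it.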
To provide a quantitative measure of the quality of a segmentation of an image, a `true' segmentation must be known and the differences between the two segmentations must be transformed into one or more quality values. A method is described to generate a realistic satellite image and its true segmentation to sub-pixel level using ground truth data and a real image. Quality measures are described which evaluate two kinds of errors: the splitting of a real field into more than one segment, and the merging of pixels from different fields into one segment. Results for various segmentation methods are discussed.
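The two error kinds can be sketched as simple pixel-count measures against the true segmentation; these are illustrative measures in the spirit of the paper, not its exact formulas:

```python
import numpy as np

def split_merge_errors(true_seg, test_seg):
    """Quantify segmentation quality against a known 'true' segmentation.

    split error: fraction of pixels split off their true field into extra
    test segments; merge error: fraction of pixels in test segments that
    mix more than one true field.  Illustrative measures, not the paper's
    exact formulas.
    """
    t = np.asarray(true_seg).ravel()
    s = np.asarray(test_seg).ravel()
    n = t.size
    split = merge = 0
    for f in np.unique(t):
        mask = t == f
        dominant = np.bincount(s[mask]).max()    # largest single test segment
        split += mask.sum() - dominant           # pixels split off the field
    for g in np.unique(s):
        mask = s == g
        dominant = np.bincount(t[mask]).max()    # largest single true field
        merge += mask.sum() - dominant           # pixels merged from others
    return split / n, merge / n
```

A perfect segmentation scores (0, 0); oversegmentation drives the split error up while leaving the merge error at zero, and vice versa.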
Mapping techniques frequently make use of 2D or 3D restitution processes applied to remotely sensed images obtained by airborne or spaceborne sensors. The development of these techniques is driven by an increasing demand for geographical information and by a wide range of increasingly well resolved data. One way to exploit such a profusion of data is to use entirely automatic methods. But the automation of techniques for extracting geographical features comes up against a major difficulty, viz. the validation of the results. This difficulty is mainly linked to the lack of reference data. Moreover, it is difficult if not impossible to validate an exploitation algorithm from a small number of tests, for then one is validating the final product, not the exploitation method. In fact, each experiment is characterized by a particular landscape, a particular sensor, and particular conditions of image acquisition. To overcome these limitations, an evaluation method is proposed, based on landscape modelling and on geometric and radiometric sensor modelling. The method uses a parametric simulator whose input (a landscape model) constitutes a `ground truth', thus allowing a quantitative assessment of the results. It offers two main advantages: first, it allows one to generate and analyze a large variety of images over different landscapes and with various sensor modelling conditions, in order to draw wider conclusions; the lack of reference data is thereby overcome. The second advantage lies in the fact that the simulation approach permits a quantitative parametric comparison between the reference data and the extracted data.