Recently the protection of digital information has received significant attention and many techniques have been proposed. Digital watermarking is an effective technique for protecting the copyright and ownership of digital information. Since the 1990s, various implementation approaches to digital watermarking have been presented. In this paper, an adaptive blind watermarking algorithm for still images based on the discrete wavelet transform is proposed. Wavelet theory is mainly used in multi-resolution analysis and has recently been applied to watermarking. To improve robustness, the embedding strength is decided by the background luminance and texture-masking characteristics of the human visual system (HVS), making it adaptive to the carrier image. The experimental results show that the watermarked image has good quality, the watermark is imperceptible, and the algorithm is robust against image processing operations such as median filtering, JPEG lossy compression, additive Gaussian noise and cropping attacks. Hence our method can be used to protect the property rights of digital images effectively.
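The luminance- and texture-adaptive embedding strength can be sketched as follows. The block size, the weights `k_lum`/`k_tex` and the linear combination are illustrative assumptions, not the paper's exact HVS mask:

```python
import numpy as np

def adaptive_strength(img, base=2.0, k_lum=0.5, k_tex=1.5, block=8):
    """Per-block embedding strength from HVS cues: brighter and more
    textured blocks tolerate a stronger watermark (illustrative model)."""
    h, w = (s - s % block for s in img.shape)
    alpha = np.zeros((h // block, w // block))
    for i in range(0, h, block):
        for j in range(0, w, block):
            b = img[i:i + block, j:j + block].astype(float)
            lum = b.mean() / 255.0          # background luminance cue
            tex = b.std() / 128.0           # texture (masking) cue
            alpha[i // block, j // block] = base * (1 + k_lum * lum + k_tex * tex)
    return alpha
```

A watermark coefficient would then be embedded with the per-block strength, so flat dark regions receive a weaker, less visible perturbation than bright textured ones.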
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks.
You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
This paper presents a novel method for automatic image registration. It represents an image as a triangular mesh and uses triangles as feature primitives. First, it detects corner features and triangulates them into a triangular mesh. Then, correspondences between triangles from different images are established by evaluating the similarity of the triangular regions. Affine rectification is applied to establish pixel correspondences. Based on the triangle correspondences, the image transformation is estimated using a RANSAC estimator. The proposed method is applied to various image pairs related by projective transformations; experimental results show that the method works successfully even when there is large rotation or severe perspective deformation between the images.
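The RANSAC estimation step can be sketched as below: sample three correspondences (one triangle), fit an affine map, and keep the model with the most inliers. The iteration count and inlier tolerance are illustrative assumptions:

```python
import numpy as np

def estimate_affine(src, dst):
    # Least-squares affine fit: [x y 1] @ A.T ~ dst, A is 2x3.
    X = np.hstack([src, np.ones((len(src), 1))])
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T

def ransac_affine(src, dst, iters=200, tol=1.0, seed=0):
    """Minimal RANSAC: sample 3 correspondences (a 'triangle'), fit an
    affine transform, keep the model with the most inliers."""
    rng = np.random.default_rng(seed)
    best, best_inl = None, 0
    ones = np.ones((len(src), 1))
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        A = estimate_affine(src[idx], dst[idx])
        pred = np.hstack([src, ones]) @ A.T
        inl = int(np.sum(np.linalg.norm(pred - dst, axis=1) < tol))
        if inl > best_inl:
            best, best_inl = A, inl
    return best, best_inl
```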
This paper describes an efficient algorithm for color image segmentation based on a multiresolution application of the wavelet transform and the watershed segmentation algorithm. The procedure toward complete segmentation consists of four steps: pyramid representation, image segmentation, region projection and region merging. First, pyramid representation creates multiresolution images using a wavelet transform. Second, the lowest-resolution image of the pyramid is segmented using a watershed segmentation algorithm. Third, the segmented, labeled low-resolution image is projected onto the full-resolution (original) image by the inverse wavelet transform. Finally, region merging merges the segmented regions using fuzzy similarity. Experimental results show that the presented method can be applied to the segmentation of noisy or degraded images and also reduces over-segmentation.
The study of mesoscale convective clouds is an important issue in weather analysis, and several methods have been introduced for cloud tracking on satellite images. Two kinds of methods are commonly used, based on shape matching and on motion tracking. In this paper, a new motion tracking method based on a snake model is introduced. Snakes are known to be more efficient than level sets; however, they do not handle topological changes. The method presented here therefore treats the cloud tracking problem in two parts. First, the snake model used for motion tracking is presented, with techniques to ensure the robustness of the method when tracking highly deformable objects. Second, the problem of topological transformation is addressed: splitting and merging are characterised and performed by applying geometrical criteria and methods. The method is applied to real-case data and some results are presented and discussed.
Conventional methods for detecting changes between temporal images are subject to the effects of illumination variance and registration noise. The method proposed in this paper uses the edge structure information in an image to detect changes. A new concept based on biological vision principles, named the Edge Token, is introduced to describe the edge structure; it is extracted by applying a set of Gabor functions to the intensity map of the gradient image. Correlation is used to compare the similarity of two Edge Token vectors. In order to reduce false alarms, a suppression factor is used to reduce the effect of weak edges. According to the result of the correlation process, a decision rule can be applied to locate the outline of the changed area. Edge-Token-based change detection is robust to illumination variance and registration noise. Experiments on simulated data and remote sensing images are presented.
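A minimal sketch of the correlation-with-suppression idea follows. As a stand-in for the paper's Gabor-based Edge Token, the descriptor here is a gradient-weighted orientation histogram; the cell size, bin count and suppression fraction are assumptions:

```python
import numpy as np

def edge_tokens(img):
    """Orientation histogram weighted by gradient magnitude, as a
    simplified stand-in for the Gabor-based Edge Token descriptor."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi          # orientation, mod pi
    hist, _ = np.histogram(ang, bins=8, range=(0, np.pi), weights=mag)
    return hist

def token_similarity(t1, t2, suppress=0.1):
    # Suppress weak edges before correlating, to reduce false alarms.
    t1 = np.where(t1 < suppress * t1.max(), 0, t1)
    t2 = np.where(t2 < suppress * t2.max(), 0, t2)
    n = np.linalg.norm(t1) * np.linalg.norm(t2)
    return float(t1 @ t2 / n) if n else 1.0
```

Low similarity between corresponding cells of the two images would then flag a changed area.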
This paper describes a decision-level fusion system (DLFS) for multisensor images. Objects in the source images are obtained by classifying feature vectors with a support vector machine (SVM); an object-oriented correlation measure derived from these objects supervises the fusion process. Experimental results using real data show that the proposed algorithm works well in multisensor image fusion.
The feature contrast model (FCM), the simplest form of the matching function in Tversky's set-theoretic similarity, is a well-known similarity model in the psychology community. Although FCM can explain similarity with both semantic and perceptual features, it is very difficult for FCM to measure natural image similarity with semantic features because of the requirement that all features be binary and the complex mechanism by which semantic features are transformed into binary ones. The fuzzy feature contrast model (FFCM) is an extension of FCM that replaces the complex feature representation mechanism with a suitable fuzzy membership function. Through this fuzzy logic, visual features in the FFCM can be represented as multidimensional points instead of an expansible feature set and used to measure the visual similarity between two images. Based on an analysis of the distinction between the two feature structures (i.e., the expansible feature set and the multidimensional vector), we propose a ratio model that expresses the similarity between two images as a ratio of the measure of the semantic feature set to that of the multidimensional visual features. Experimental results over real-world image collections show that our model addresses the distinction between semantic and visual feature structures to some extent. In particular, our model suits the case where semantic features are obtained implicitly from interaction with users while the visual features remain transparent to users, for example in relevance feedback for interactive image retrieval.
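To make the two feature structures concrete, here is a toy sketch that scores semantic features as sets via Tversky's ratio-model matching and visual features as a vector via a distance-based fuzzy membership, then averages the two. The specific combination, weights and distance kernel are illustrative assumptions, not the paper's exact formulation:

```python
import math

def ratio_similarity(sem_a, sem_b, vis_a, vis_b, alpha=0.5):
    """Toy two-structure similarity: Tversky ratio model on semantic
    feature sets, fuzzy membership on visual feature vectors."""
    common = len(sem_a & sem_b)
    distinct = len(sem_a - sem_b) + len(sem_b - sem_a)
    sem = common / (common + alpha * distinct) if (common + distinct) else 1.0
    d = math.dist(vis_a, vis_b)          # Euclidean distance in feature space
    vis = 1.0 / (1.0 + d)                # fuzzy membership of "visually same"
    return 0.5 * sem + 0.5 * vis
```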
In this paper a performance comparison of several data preprocessing algorithms for remote sensing image classification is presented. The selected algorithms are principal component analysis (PCA) and three independent component analysis (ICA) variants: Fast-ICA (Aapo Hyvarinen, 1999), Kernel-ICA (KCCA and KGV; Bach & Jordan, 2002) and EFFICA (Aiyou Chen & Peter Bickel, 2003). These algorithms were applied to a remote sensing image (1600×1197) obtained from Shunyi, Beijing. For classification, an MLC method is used on the raw and preprocessed data. The results show that classification with preprocessed data gives more reliable results than with raw data; among the preprocessing algorithms, the ICA algorithms improve on PCA, and EFFICA performs better than the others. The convergence of these ICA algorithms (for more than a million data points) is also studied; the results show that EFFICA converges much faster than the others. Furthermore, because EFFICA is a one-step maximum likelihood estimate (MLE) that reaches asymptotic Fisher efficiency, its computation is small and its memory demand is greatly reduced, which resolves the "out of memory" problem encountered with the other algorithms.
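Of the compared methods, the PCA baseline is the simplest to sketch: center the pixel-by-band matrix and project onto the leading right singular vectors. This is a generic sketch, not the paper's implementation:

```python
import numpy as np

def pca(X, k):
    """PCA via SVD: rows are pixels, columns are spectral bands;
    returns the first k principal component scores."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```

The preprocessed scores (from PCA or an ICA variant) would then be fed to the classifier in place of the raw bands.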
In signal processing, great interest has been focused on the sparsest representation. Variable selection is a principle for choosing an "optimal" subset of basis elements for representing a signal, where optimal means having the smallest value under some criterion among all such decompositions. Basis pursuit is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest l1 norm of coefficients, instead of the l0 norm, among all such decompositions. In this paper we present a relation between variable selection and basis pursuit. After the widely used Cp criterion is discussed further, variable selection is extended to the case of overcomplete dictionaries. Based on the conditions under which the l1 and l0 norms are equivalent in signal decomposition, the relationship between the variable selection method and basis pursuit is discussed. Finally, an example of spectrum estimation is given to demonstrate the equivalence of the two methods.
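The l1-penalized decomposition behind basis pursuit is commonly solved by iterative soft thresholding; a minimal sketch follows. The step size, penalty and iteration count are illustrative, and this is a generic solver rather than the paper's method:

```python
import numpy as np

def ista(D, y, lam=0.1, iters=500):
    """Iterative soft-thresholding for min 0.5||Dx - y||^2 + lam*||x||_1,
    the l1-relaxed sparse decomposition used in basis pursuit denoising."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        g = D.T @ (D @ x - y)              # gradient of the quadratic term
        x = x - g / L                      # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0)  # soft threshold
    return x
```

With `D` an overcomplete dictionary, the l1 penalty drives most coefficients to exactly zero, selecting a sparse superposition of dictionary elements.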
In this paper star identification is formulated as a pattern recognition problem. Star patterns, as seen from spacecraft, are represented as feature descriptors containing the directional cosines of the guide star, star indices and the singular values of each star group. An eigenvalue-based Boltzmann entropy is used to select the stars adjacent to the guide star that specify a star pattern. The training patterns are kept in a reference catalog and used for comparing and classifying query patterns (star groups). The effectiveness of eigenvalues (or singular values) as features is shown with distribution histograms. Identification performance was tested under different levels of pixel error.
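Singular values make good star-group features because they are invariant to the sensor's attitude: rotating the whole group does not change them. A minimal sketch of this property, with the group stacked as rows of unit direction vectors (illustrative, not the paper's full descriptor):

```python
import numpy as np

def pattern_feature(dirs):
    """Singular values of the matrix of unit direction vectors of a
    star group; rotation of the whole group leaves them unchanged."""
    dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    return np.linalg.svd(dirs, compute_uv=False)
```

A query group can therefore be matched against catalog entries by comparing singular-value vectors, regardless of spacecraft orientation.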
A new method is proposed to detect and recognize multiple targets in a sequence-image tracking system. Moving targets are separated from the background by multi-frame differencing. The image is binarized according to a dynamic threshold table, and then invariant characteristics are extracted from every target. These characteristics are used for target recognition by a BP neural network. Computer simulation shows that the method is practical and achieves high speed in the detection and recognition of multiple moving targets.
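The multi-frame differencing step can be sketched as below: a pixel is marked as moving only if it differs from both neighbouring frames. The three-frame scheme and threshold value are assumptions for illustration:

```python
import numpy as np

def detect_motion(frames, thresh=30):
    """Three-frame differencing: a pixel is 'moving' if it changes
    against both the previous and the next frame."""
    prev, cur, nxt = (f.astype(int) for f in frames)
    d1 = np.abs(cur - prev) > thresh
    d2 = np.abs(nxt - cur) > thresh
    return d1 & d2                 # binary motion mask
```

The binary mask would then be segmented into targets, from which invariant features are extracted for the neural-network classifier.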
The deviation analysis of spatio-temporal structure is based on GIS overlay. At present, the common method for describing the deviation of expansion in spatial structure uses comparative analysis to study differences in the spatial position of various land-use classes. Although this method can outline the spatial structure of land use objectively and concisely, the change speeds it yields are not strictly comparable, because the spatial units into which the area is divided do not have equal land areas. This paper therefore improves the method by introducing an annualized changing intensity index, a comparative index that can describe the deviation of the land-use spatio-temporal structure. The changing intensity index is the changed land-use area of a spatial unit expressed as a percentage of that unit's overall land area over a given research period. To compare the intensity or trend of urban land-use change across different research periods, the index for each spatial unit is averaged per year; this standardization turns it into an annual change speed relative to the unit's land area, and hence makes it comparable. With it we can analyse each land-use class thoroughly and obtain the deviation of the spatio-temporal structure for different land classes. The result will benefit the planning and management of urban land use in China's developed districts in the future.
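The annualized index reduces to simple arithmetic; a sketch of the formula as read from the description above (the function name is my own):

```python
def intensity_index(changed_area, total_area, years):
    """Annualized land-use changing intensity index: changed area as a
    percentage of the unit's total land area, averaged per year."""
    return 100.0 * changed_area / total_area / years
```

Because the result is a percentage per year, units of different sizes and study periods of different lengths become directly comparable.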
With the development of the Internet and communication technology, video coding has become more and more important. When the video transmission rate is high, the correlation between adjacent video frames is also high, and the cost of coding the difference between frames is less than that of coding the frames directly. Therefore, when video streams are coded, motion estimation is usually used to reduce the temporal correlation, and it plays an important role in video coding. The present Diamond Search is accepted as one of the most efficient fast search algorithms. In this paper, a new motion estimation approach based on an analysis of Diamond Search is proposed, in which video frames fall into two categories: violent-motion frames and moderate-motion frames. Based on this, a fast hierarchical diamond search algorithm is proposed for the majority of moderate-motion frames. The experimental results show that the proposed algorithm is much faster than Diamond Search while obtaining the same image quality.
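For reference, the classic Diamond Search that the paper analyses can be sketched as follows: repeat the large diamond pattern until the best point is its centre, then refine once with the small diamond. The SAD cost and 8×8 block size are the usual choices:

```python
import numpy as np

LDSP = [(0, 0), (0, -2), (0, 2), (-2, 0), (2, 0),
        (-1, -1), (-1, 1), (1, -1), (1, 1)]   # large diamond pattern
SDSP = [(0, 0), (0, -1), (0, 1), (-1, 0), (1, 0)]  # small diamond pattern

def sad(ref, cur, y, x, dy, dx, b):
    blk = ref[y + dy:y + dy + b, x + dx:x + dx + b]
    if blk.shape != (b, b):
        return np.inf                       # candidate outside the frame
    return np.abs(blk.astype(int) - cur[y:y + b, x:x + b].astype(int)).sum()

def diamond_search(ref, cur, y, x, b=8):
    """Classic diamond search for the motion vector of the b-by-b block
    of `cur` at (y, x) within the reference frame `ref`."""
    dy = dx = 0
    while True:
        costs = [sad(ref, cur, y, x, dy + a, dx + c, b) for a, c in LDSP]
        k = int(np.argmin(costs))
        if k == 0:                          # best point is the centre
            break
        dy, dx = dy + LDSP[k][0], dx + LDSP[k][1]
    costs = [sad(ref, cur, y, x, dy + a, dx + c, b) for a, c in SDSP]
    k = int(np.argmin(costs))
    return dy + SDSP[k][0], dx + SDSP[k][1]
```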
As remote sensing image resolution continuously improves, the acquisition and updating of GIS spatial data with the aid of remote sensing imagery is becoming a hot topic at the intersection of RS and GIS. This paper discusses a method of GIS database updating based on the fusion of SPOT and TM image features. We overlay the image fusion result with the registered GIS vector map to update the GIS vector data; in this way, quick updating of GIS data using high-resolution remote sensing image information can be realized.
Among the variety of approaches proposed in the literature, the Wiener filter and wavelet-transform-based methods clearly stand out for their effectiveness and, in many cases, simplicity. By exploiting the characteristics of both wavelet thresholding denoising and spatial Wiener filtering, this paper presents a combined scheme for noise removal in images. We first perform thresholding denoising in the wavelet domain to obtain a pre-denoised image; then a spatially adaptive Wiener filter, i.e. Lee filtering, is used to increase the quality of the restored image. The crux of our method lies in the simple yet effective estimation of the optimal noise variance for the Lee filter. By numerical computation, we obtain an optimal noise variance for the Lee filter that nearly minimizes the mean square error (MSE) of the pre-denoised image. Experimental results show that the mean square error and signal-to-noise ratio (SNR) of our combined denoising approach are improved compared with denoising solely in the wavelet or spatial domain.
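The two-stage pipeline can be sketched as below with a one-level Haar transform, universal soft thresholding and a 3×3 Lee filter. The wavelet choice, threshold and window size are illustrative assumptions; the paper's contribution (the numerically optimized noise variance for the Lee stage) is replaced here by the known noise standard deviation:

```python
import numpy as np

def haar2(x):
    # One-level orthonormal 2-D Haar transform (approx, horiz, vert, diag).
    a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 2
    h = (x[0::2, 0::2] - x[0::2, 1::2] + x[1::2, 0::2] - x[1::2, 1::2]) / 2
    v = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 2
    d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 2
    return a, h, v, d

def ihaar2(a, h, v, d):
    x = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    x[0::2, 0::2] = (a + h + v + d) / 2
    x[0::2, 1::2] = (a - h + v - d) / 2
    x[1::2, 0::2] = (a + h - v - d) / 2
    x[1::2, 1::2] = (a - h - v + d) / 2
    return x

def denoise(img, sigma):
    """Wavelet soft-thresholding followed by a 3x3 Lee (local Wiener)
    filter; an illustrative sketch of the combined scheme."""
    a, h, v, d = haar2(img)
    t = sigma * np.sqrt(2 * np.log(img.size))      # universal threshold
    soft = lambda c: np.sign(c) * np.maximum(np.abs(c) - t, 0)
    pre = ihaar2(a, soft(h), soft(v), soft(d))     # pre-denoised image
    p = np.pad(pre, 1, mode='edge')                # 3x3 local statistics
    win = np.stack([p[i:i + pre.shape[0], j:j + pre.shape[1]]
                    for i in range(3) for j in range(3)])
    mean, var = win.mean(0), win.var(0)
    gain = np.maximum(var - sigma ** 2, 0) / np.maximum(var, 1e-12)
    return mean + gain * (pre - mean)              # Lee filter output
```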
Linear features are now widely used in building detection, but linear-feature-based methods, though simple, suffer from low accuracy and high computational cost. This paper proposes a novel and efficient method for automatically detecting buildings in remote sensing images based on multi-characteristic fusion. The method first adopts the Canny algorithm to detect edge lines in the images. Then, using the distribution of buildings and the Hough transform, it employs the ISODATA clustering algorithm to detect the main orientations of buildings; this clustering analysis filters the edge lines and helps obtain the latent edges of building objects. After that, the edges are linked according to linking rules to obtain the buildings' shapes. However, many false detections remain. To reduce them, a series of geometrical characteristics (such as corners and shadows) and gray-level characteristics of buildings are used as judgment criteria. We put forward a corresponding algorithm to extract each characteristic; a fusion method based on the maximum-membership principle in fuzzy pattern recognition is then introduced to combine the results of all these algorithms and finally detect the buildings. Extensive experimental results show that this new method, compared with common linear-feature-based building detection methods, is faster, more accurate and more robust. It is especially fit for practical applications in relatively complicated environments.
An improved second-generation digital image watermarking scheme is proposed. This scheme exploits region features instead of point or line features. The region features are retrieved by the watershed transform, which allows watermark recovery after common attacks. Experiments have shown that the proposed scheme is intrinsically robust against compression and noise, and is more robust against geometrical attacks and JPEG compression compared with Kutter's method. The watermark capacity is improved because region features are more robust than point or line features.
With the development of image fusion technologies, objective evaluation methods for image fusion are needed that can guide a computer in automatically selecting appropriate fusion algorithms for different scenes. This paper consists of three parts. First, current evaluation methods for image fusion are classified into four kinds: methods based on statistical characteristics, on definition, on information theory, and on important feature factors. All methods are analyzed and compared in terms of the validity, redundancy and consistency of the algorithms when evaluating fusion effects. Second, on the basis of this analysis and comparison, some evaluation parameters are selected that surpass the others in objectivity and validity and that reflect different aspects of fusion-algorithm performance. Then, based on a voting idea, a method for the overall evaluation of fusion algorithms is proposed using the selected evaluation parameters. Finally, the proposed overall evaluation method is applied to the fusion results of different remote sensing images and different fusion algorithms. Simulation results show that the proposed method is accurate, objective and very effective for evaluating image fusion performance, and its evaluation results accord with subjective evaluation by direct inspection.
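One simple way to realize the voting idea is Borda-style rank aggregation: each evaluation metric ranks the candidate fusion algorithms, the ranks are summed, and the lowest total wins. This is an illustrative scheme, not necessarily the paper's exact voting rule:

```python
import numpy as np

def vote_rank(scores):
    """Borda-style voting: `scores[m][a]` is metric m's score for
    algorithm a (higher is better); returns (winner index, rank totals)."""
    scores = np.asarray(scores, float)       # shape (metrics, algorithms)
    # Double argsort turns each row of scores into ranks (0 = best).
    ranks = np.argsort(np.argsort(-scores, axis=1), axis=1)
    totals = ranks.sum(axis=0)
    return int(np.argmin(totals)), totals
```

Rank aggregation sidesteps the problem that metrics live on incomparable scales (entropy vs. standard deviation, say), since only orderings are combined.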
Human face localization in an image is a key component of many intelligent applications. A compact face detection method, especially suitable for multiple faces against a complex background, is proposed in this paper based on the symmetry properties of the face and facial organs. The detector has a low computation load, can localize several faces simultaneously, and is robust to poor illumination. Experiments show the effectiveness of the algorithm, so it is potentially useful in related intelligent applications.
Digital watermarking provides an effective way to protect images. A detection scheme for digital image watermarking that uses the principal components of the watermarked image is proposed in this paper. The scheme is based on the DWT and DCT. Since the principal components of an image are insensitive to geometrical transforms, filtering and rotation, using them for watermark detection inherits the same merit. The scheme proceeds as follows: first, the original image is decomposed by the 2-D discrete wavelet transform and the detail sub-bands are reserved; then the approximation image is transformed by the discrete cosine transform and the watermark image is embedded. At the detection stage, the DWT and DCT are carried out first, and the embedded watermark, which may have been corrupted by attacks, is extracted in the same domain. A principal components transform is then applied to the corrupted image to restore the watermark; after processing, the restored watermark has the least mean square error with respect to the original watermark image. The experimental results showed a good balance between robustness and insensitivity, while the quality of the detected watermark is greatly improved.
IR and visible sensors are very common in military image fusion; however, since there is little correlation and a lack of consistent features between their acquired images, automatic registration of IR and visible images is very difficult. In this paper, an optoelectronic imaging anti-ship missile is taken as the research object, and based on an analysis of its seeker's imaging process, we propose a new automatic registration algorithm based on sensor parameters and image information. The basic idea of our algorithm is to decompose the transform model and simplify it step by step. For example, the transform for IR and visible image registration is affine. By adjusting sensor parameters, the affine transform can be simplified to a rigid transform by eliminating the scaling change between images, and by finding the centroid of the ship target's contour we can further eliminate the translational change between them. After registration is achieved, its quality is assessed by judging whether the sea-sky lines of the two registered images are in the same position. The final simulation experiments convince us that our algorithm performs well on the difficult problem of registering small-target images from different sensors.
The paper presents an algorithm for finding homonymous (corresponding) line segments in images based on the depth faces of buildings. A TIN is formed from the object points calculated from homonymous points in a stereopair, and contour lines are interpolated from the TIN at a given interval. With the image and contour lines, the different depth faces of a building can be partitioned approximately. Homonymous line segments are found within each depth face by matching along the epipolar line; tests show the method is feasible. It simultaneously solves the problem of line matching in areas where the building depth face changes.
This paper draws on the newly developed empirical mode decomposition (EMD) and presents an effective algorithm for removing noise in sonar images using the EMD method. The EMD approach, whose decomposition basis is derived from the data itself, has proved to be intuitive, direct and adaptive; as a result, EMD is well suited to analyzing nonlinear and non-stationary data. Sonar images can be decomposed into a series of modes, each with a different characteristic spatial scale defined by the spacing between extrema. Noise removal is implemented by smoothing the modes blurred by noise in the spatial domain. Differing from previous works, the sifting process of EMD is realized using the h-extrema transform to detect regional extrema and radial basis functions for surface interpolation. Application to sonar images has shown that the performance of the algorithm is satisfactory in both noise removal and edge preservation.
This paper presents a system to detect and track multiple moving objects in the presence of mutual occlusion and shadow. A novel change detection algorithm based on the Cauchy distribution is proposed. The ratio of pixel intensities between two images is used as the feature for modeling and subtracting the background. Under the assumption that the observed temporal intensity variation of each background pixel is caused by white noise, the distribution of the ratio of background pixel intensities between a current image and a reference image obeys a Cauchy distribution. Robust change detection is then carried out by hypothesis testing, with decision thresholds tied to the false alarm rate. We exploit spectral and geometrical properties of shadows to recognize and eliminate them in video sequences; intensity, hue and saturation in the YCbCr color space are employed to this end. In order to resolve ambiguity due to occlusion and recover from intermittent tracking failures, we propose a method for tracking multiple moving objects based on multi-cue and dynamic template matching in consecutive frames and on motion estimation by a Kalman filter. In our system, a fast and accurate clustering algorithm based on k-nearest-neighbor search is employed, and the feature space is constructed from the position, color, shape and velocity of the moving objects. Occlusions are addressed in two classes, static occlusion and dynamic occlusion. Depending on prior knowledge of the background scene and on feedback from object detection and tracking, the distribution of static occlusion regions in the scene can be acquired and updated. The bounding box around a static occlusion region is used as an alarm sign to start the handling of static occlusion. Dynamic occlusion events are detected and processed by the proposed tracking scheme and the multi-cue and dynamic template matching approach.
Experimental results demonstrate that the proposed approach is feasible and effective.
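A minimal NumPy sketch of the kind of ratio-based hypothesis test the abstract describes; the Cauchy scale `gamma` and the false-alarm rate here are illustrative assumptions, not values from the paper:

```python
import numpy as np

def cauchy_threshold(gamma, false_alarm_rate):
    # For a Cauchy distribution centered at 1 with scale gamma,
    # P(|r - 1| > t) = 1 - (2/pi) * arctan(t / gamma); solving for t at the
    # chosen false-alarm rate gives the two-sided decision threshold.
    return gamma * np.tan(np.pi / 2.0 * (1.0 - false_alarm_rate))

def detect_changes(current, reference, gamma=0.05, false_alarm_rate=0.05):
    """Flag pixels whose intensity ratio falls outside the Cauchy acceptance band."""
    eps = 1e-6  # avoid division by zero in dark reference pixels
    ratio = current.astype(float) / (reference.astype(float) + eps)
    t = cauchy_threshold(gamma, false_alarm_rate)
    return np.abs(ratio - 1.0) > t
```

A pixel is declared "changed" only when its ratio deviates from 1 by more than the threshold implied by the tolerated false-alarm rate.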
This paper presents a real-time image-processing system based on multiple processors (TI 320C6400). The system is modular and built on a multi-bus architecture. Within each basic module, communication is implemented via a local bus; between modules, communication is implemented via so-called "Links" and multi-channel buffered serial ports (McBSPs). The system is flexible and scalable, and can be programmed as a pipelined, SIMD or MIMD architecture to meet the variability of parallel image-processing algorithms.
Content-based image retrieval (CBIR), which provides an effective and advanced means to manage and utilize image databases, is one of the most active research topics in image comprehension, image databases and computer vision. A CBIR system is one of the most important services and applications offered to a Spatial Data Infrastructure (SDI), and SDI will rely on CBIR more and more. Nevertheless, at present the main research results in CBIR concentrate on the retrieval of small, simple images, such as fingerprints and trademarks. Limited by the properties of remote sensing images, such as their varied dimensions, huge data size and wealth of information, research on remote sensing image CBIR is rarely reported, and no mature remote sensing image CBIR system exists today. This paper analyzes the challenges and difficulties that a remote sensing image CBIR system faces, including techniques for feature extraction and representation, the organization and management of remote sensing image data in a CBIR system, the use of topological relationships among remote sensing images, high-dimensional vector indexing, and self-learning methods. A three-layer architecture is then constructed for a remote sensing image CBIR system. Finally, we predict the trends of remote sensing image CBIR: feature extraction and representation will rely more and more on the semantic information of images, research on the retrieval of compressed images will become more attractive, and a unified model for content-based remote sensing image retrieval systems will be constructed.
Because geographical data are updated more slowly than the demand for them grows, geographical data updating has become the bottleneck that Geographical Information Systems (GIS) face at present. Remote sensing images are the most readily available data source for updating geographical data. In this paper, a strategy that detects changed areas by maximizing the variance between clusters is put forward. Based on the detected changed areas, the geographical data can be updated using remote sensing images. The validity of the strategy is demonstrated at the end of the paper.
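Selecting the threshold that maximizes the between-cluster variance on a difference image is essentially Otsu's criterion; a NumPy sketch under that reading (the paper's exact formulation may differ):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Pick the threshold maximizing the between-cluster (inter-class) variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                 # weight of the "unchanged" cluster
    m = np.cumsum(p * centers)        # cumulative mean
    mt = m[-1]                        # global mean
    # between-class variance for every candidate split point
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mt * w0 - m) ** 2 / (w0 * (1.0 - w0))
    sigma_b = np.nan_to_num(sigma_b)
    return centers[np.argmax(sigma_b)]

def changed_area(img_old, img_new):
    """Binary change mask: difference magnitudes above the Otsu-style threshold."""
    diff = np.abs(img_new.astype(float) - img_old.astype(float))
    return diff > otsu_threshold(diff.ravel())
```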
We present a new approach to edge detection on synthetic aperture radar (SAR) images based on a contourlet-domain hidden Markov tree (CD-HMT) model. The contourlet transform employs a double filterbank structure, the pyramidal directional filterbank, applying a Laplacian pyramid decomposition followed by a local directional filterbank. Compared with the wavelet transform, the contourlet transform not only captures multiresolution and local information of an image but also obtains its directional information flexibly, by using different numbers of directions at different scales. This non-separable two-dimensional transform is a new alternative to, and improvement on, separable wavelets for image representation. The HMT, in turn, is a tree-structured probabilistic graph that captures the statistical properties of contourlet coefficients across scales and directions, where each coefficient is considered an observation of a hidden state variable indicating whether the coefficient belongs to a singularity structure: state "1" represents a location belonging to a singularity structure, and state "0" one that does not. The CD-HMT model is first trained by the Expectation-Maximization (EM) algorithm, and then the Viterbi algorithm is utilized to uncover the hidden state sequences based on maximum a posteriori (MAP) estimation. Moreover, we take into account the effect of speckle on the detection performance for singularity structures. Finally, a thinning post-processing procedure is performed to obtain the edge map of the SAR image. Experiments on both simulated speckled and real SAR images demonstrate the feasibility and effectiveness of our approach, with performance that outperforms the classical Canny edge detector.
Image fusion refers to techniques that integrate complementary information from multiple image sensors such that the new images are more suitable for human visual perception and computer-processing tasks. In this paper, a new image fusion algorithm based on the multiwavelet transform is presented for fusing multispectral images. Multiwavelets are extensions of scalar wavelets and have several unique advantages in comparison with them, so the multiwavelet transform is employed to decompose and reconstruct images in this algorithm. The fusion is performed at the pixel level; other types of fusion schemes, such as feature or decision fusion, are not considered. A feature-based fusion rule is used to combine the original subimages and to form a pyramid for the fused image. When images are merged in multiwavelet space, different frequency ranges are processed differently, so information from the original images can be merged adequately, improving information analysis and feature extraction. An experiment on the fusion of registered SPOT Panchromatic and XS3 band images is presented. The results show that this fusion algorithm, based on the multiwavelet transform, is an effective approach in the image fusion area.
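As a rough illustration of pixel-level fusion in a wavelet domain, here is a one-level scalar Haar version with a max-magnitude detail rule (the paper uses multiwavelets and a feature-based rule, so this is only an analogy):

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition: approximation + 3 detail subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((2 * a.shape[0], a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def fuse(img1, img2):
    """Average the approximations, keep the larger-magnitude detail coefficient."""
    c1, c2 = haar2d(img1.astype(float)), haar2d(img2.astype(float))
    ll = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(c1[1:], c2[1:])]
    return ihaar2d(ll, *details)
```

The max-magnitude rule keeps, in each subband, whichever source image has the stronger local feature at that position.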
Recent advances in remote-sensing technology suggest that satellite-based earth observation (EO) has great potential for providing and updating spatial information in a timely and cost-effective manner. However, as the spatial resolution of satellite images improves, the detail in the images becomes more complicated. Even when texture features are included for multi-spectral high-resolution satellite imagery, conventional pixel-based classification methods have limited success. In order to take better advantage of the spatial information in high-resolution satellite imagery, a combined segmentation and pixel-based classification approach is presented in this paper. First, a pixel-based multi-spectral maximum-likelihood classifier produces an initial classification result. Second, an image segmentation is created by the watershed transform followed by region merging. Finally, the final classification map is obtained from the proportions of each class present in each segment. A QuickBird image of a suburban area of Shanghai, China, is used to validate the proposed method. Experiments show that the classification map produced by the combined approach is free of visual noise, has clean borders, and has better classification accuracy than that produced by the pixel-based approach.
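The final step, labeling each segment by the classes present inside it, reduces in the simplest case to a per-segment majority vote, which can be sketched as:

```python
import numpy as np

def segment_majority_vote(pixel_classes, segment_labels):
    """Assign every pixel of a segment the most frequent pixel class inside it."""
    out = np.empty_like(pixel_classes)
    for seg in np.unique(segment_labels):
        mask = segment_labels == seg
        classes, counts = np.unique(pixel_classes[mask], return_counts=True)
        out[mask] = classes[np.argmax(counts)]
    return out
```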
We propose an algorithm that tracks a deformable object in a complex scene using Bayesian estimation in the particle filter framework. Both the dynamic model and the measurement model of the particle filter, which exploit the structure of target edges and the gray-level distribution around them, are constructed from interframe correlation in the context of object tracking. A fuzzy metric is constructed to measure the similarity between the histograms of the template and candidate sub-regions. The tracking window adapts to variations in object appearance, and the template is updated according to a confidence-level threshold. Both the detection of occlusion and its handling are formulated in terms of a threshold and a temporal window. Experimental results illustrate that this algorithm can stably track deformable targets against complex backgrounds at low computational cost.
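The Bayesian estimation loop behind such trackers can be illustrated with a generic bootstrap particle filter on a 1-D position (the paper's edge-structure measurement model and fuzzy metric are replaced here by a simple Gaussian likelihood):

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation,
                         process_std=1.0, obs_std=2.0):
    """One predict-update-resample cycle of a bootstrap particle filter."""
    # Predict: random-walk dynamic model
    particles = particles + rng.normal(0.0, process_std, size=particles.shape)
    # Update: Gaussian likelihood of the observation given each particle
    weights = weights * np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Track a target drifting from position 0 toward 10
particles = rng.normal(0.0, 5.0, size=500)
weights = np.full(500, 1.0 / 500)
for true_pos in np.linspace(0.0, 10.0, 20):
    obs = true_pos + rng.normal(0.0, 0.5)   # noisy measurement
    particles, weights = particle_filter_step(particles, weights, obs)
estimate = np.sum(particles * weights)      # posterior mean position
```

In a real tracker, the Gaussian likelihood would be replaced by a similarity score between the candidate region and the template, as the abstract describes.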
2-D electrophoresis gel images can be used for identifying and characterizing many forms of a particular protein encoded by a single gene. Conventional approaches to gel analysis require three steps: (1) spot detection on each gel; (2) spot matching between gels; and (3) spot quantification and comparison. Many researchers and developers attempt to automate all steps as much as possible, but errors in the detection and matching stages are common. To carry out gel image analysis, one first needs to accurately detect and measure the protein spots in a gel image. As in other areas of image analysis and computer vision, image segmentation remains a hard problem. This paper presents algorithms for automatically delineating gel spots. Two types of segmentation algorithms were implemented: one edge (discontinuity) based and the other region based. Both types were tested on different classes of gel images, and their advantages and disadvantages are discussed. Based on the test results, the authors suggest that fusing edge information and region information is a good complementary strategy for gel image segmentation. A first integration of the two types of segmentation algorithms was also tested; the results clearly show that the integrated algorithm can automatically delineate gel spots not only in simple images but also in complex ones, and that it performs much better than either the edge-based or the region-based algorithm alone.
In this paper we propose a novel image corner detection method. We use a new feature, "flatness", to help find corners on the basis of the SUSAN method. Our experiments show that our method performs well and can reject some false corners reported by the SUSAN method.
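For context, the SUSAN response that such a method builds on counts the neighbors whose brightness is close to that of the nucleus; a small-mask NumPy sketch (the brightness threshold `t` and mask radius here are illustrative, and the "flatness" feature is not reproduced):

```python
import numpy as np

def usan_area(img, radius=2, t=25.0):
    """USAN area per pixel: mask neighbors whose brightness is close to the nucleus."""
    offsets = [(dy, dx) for dy in range(-radius, radius + 1)
               for dx in range(-radius, radius + 1)
               if dy * dy + dx * dx <= radius * radius and (dy, dx) != (0, 0)]
    f = img.astype(float)
    area = np.zeros_like(f)
    for dy, dx in offsets:
        shifted = np.roll(np.roll(f, dy, axis=0), dx, axis=1)
        area += np.exp(-((shifted - f) / t) ** 6)   # soft similarity, as in SUSAN
    return area

def susan_response(img):
    """Corner response: large where the USAN area is small (corners shrink the USAN)."""
    area = usan_area(img)
    g = area.max() / 2.0          # geometric threshold (illustrative choice)
    return np.where(area < g, g - area, 0.0)
```

At a corner of a uniform region the USAN covers roughly a quarter of the mask, on an edge about half, and in flat areas the whole mask, which is why a small USAN area signals a corner.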
Ant colonies, and more generally social insect societies, are distributed systems that show a highly structured social organization in spite of the simplicity of their individuals. As a result of this swarm intelligence, ant colonies can accomplish complex tasks that far exceed the capacities of a single ant. Aerial image texture classification is a well-known, long-standing problem that has not been fully solved. This paper presents an ant colony optimization methodology for image texture classification that assigns N images to K clusters, viewing clustering as a combinatorial optimization problem. The algorithm has been tested on real images, and its performance is superior to that of the k-means algorithm. Computational simulations reveal very encouraging results in terms of the quality of the solutions found.
With the recent explosion of interest in microarray technology, large numbers of microarray images are being produced. Since there is no standard method for information extraction, the storage and transmission of this type of data are becoming increasingly challenging. Here we present a new segmentation-template extraction method and propose a new lossless compression scheme. Our segmentation scheme is based on mean shift filtering and morphological H-reconstruction, which can accurately segment microarray images. Based on the extracted segmentation template, our compression scheme divides the image into foreground regions and a background region and codes each region separately. In particular, two 16-bit images sharing one segmentation template are compressed, together with the template, into one file. Experimental results and a comparison with Gzip, which is commonly used in microarray management, show that our scheme is efficient and can also greatly facilitate downstream information extraction and analysis.
This paper addresses the problem of image compression in remote sensing applications. Compared with other still images, remote-sensing images are characterized by complex textures and weak local correlation. After a wavelet transform, the coefficients show a spatial clustering trend in the wavelet domain, yet most current image compression algorithms do not take this clustering into account. To further improve coding efficiency, an efficient remote sensing image coding algorithm based on morphological wavelets is proposed. First, a fast multi-scale wavelet transform is applied to the image; second, a morphological operator is designed to capture the clusters and fully exploit the redundancy between the coefficients. Compression is then achieved by this non-linear method. For multi-band remote-sensing images, a Prior Important Band (PIB) method is used to remove correlations in the spectral dimension before the coding algorithm is applied to the bands. In the experiments, one AVIRIS hyper-spectral image and two satellite images are selected to test the performance of the algorithm. The results illustrate that it outperforms JPEG2000 at low bit rates and is also suitable for multi-band images.
In recent years, video-based Intelligent Transportation Systems (ITS) have become of major importance for enforcing traffic management policies. Detection and tracking of moving vehicles is at the core of many applications dealing with traffic image sequences. For accurate scene analysis in monocular image sequences, a robust segmentation of moving objects from the static background is generally required. However, one of the main challenges in these applications is moving cast shadows, which often interfere with fundamental tasks such as object extraction and description. For this reason, shadow segmentation is an important step in image analysis.
We propose a real-time and effective method for detecting vehicles from a sequence of traffic images taken by a single roadside-mounted camera. The proposed algorithm consists of three stages: first, the moving object region and the background region are extracted from the current input image; second, exploiting the characteristics of shadows in luminance, chrominance and gradient density, the moving cast shadow region, typically caused by a moving vehicle, is segmented; finally, a Sobel edge detector is employed to detect edge pixels of the moving cast shadow in order to suppress all shadow pixels in the detected region.
The proposed method has been tested on a number of typical monocular traffic-image sequences, and the experimental results on real-world videos show that the algorithm can effectively separate the moving cast shadow from the object of interest.
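A minimal sketch of a luminance/chrominance shadow test of this general kind (the ratio band and the chrominance tolerance are assumed values, and the paper additionally uses gradient density and edge-based suppression):

```python
import numpy as np

def shadow_mask(frame_ycbcr, bg_ycbcr, moving,
                alpha=0.4, beta=0.9, chroma_tol=10.0):
    """Label moving pixels as cast shadow: darker luminance, near-unchanged chrominance."""
    y, cb, cr = [frame_ycbcr[..., i].astype(float) for i in range(3)]
    yb, cbb, crb = [bg_ycbcr[..., i].astype(float) for i in range(3)]
    # Shadow darkens the surface: luminance ratio falls in a band below 1
    ratio = y / np.maximum(yb, 1.0)
    lum_ok = (ratio >= alpha) & (ratio <= beta)
    # Shadow leaves chrominance nearly unchanged; objects usually shift it
    chroma_ok = (np.abs(cb - cbb) <= chroma_tol) & (np.abs(cr - crb) <= chroma_tol)
    return moving & lum_ok & chroma_ok
```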
In order to provide more efficient content-based functionalities for video applications, such as content-based scalable coding and content-based indexing and retrieval, it is necessary to extract meaningful objects from scenes to enable object-based representation of video content. This paper proposes an algorithm that uses Markov random field models of the motion field to extract meaningful objects from video sequences; these models characterize the motion of moving objects in terms of spatial interactions between motion vectors within the motion field. The proposed algorithm employs a splitting-and-merging procedure: in the splitting phase, the video frame is divided into a number of uniform regions with respect to spatial features; to detect moving objects, adjacent segmented regions are then grouped together according to motion information during the merging phase, which is directed by the conditional pseudolikelihood of the motion field. The performance of the algorithm is evaluated on real-world video sequences.
We propose an image contrast enhancement algorithm using the multi-scale edge representation of images. It has long been known that the Human Visual System (HVS) relies heavily on edges in the understanding and perception of scenes. Contrast in grayscale images is measured as the difference between pixels on the two sides of an edge, which is the gradient magnitude of that edge. The multi-scale edges of an image are characterized by the local extrema of its wavelet coefficients across levels, so rebuilding an image from properly stretched extrema is a promising way to enhance its contrast. We tackle this reconstruction problem with a straightforward interpolation method instead of the commonly used iterative projection process. Extensive experiments show that our algorithm is an efficient and effective contrast enhancement method.
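The underlying idea, amplifying multi-scale detail components before reconstruction, can be sketched in 1-D with a simple smoothing pyramid in place of the paper's wavelet extrema and interpolation:

```python
import numpy as np

def enhance_contrast(signal, gain=2.0, levels=2):
    """Boost multi-scale detail (edge) components of a 1-D signal, a-trous style."""
    smooth = signal.astype(float)
    details = []
    for j in range(levels):
        k = 2 ** (j + 1) + 1                      # growing smoothing window
        kernel = np.ones(k) / k
        next_smooth = np.convolve(smooth, kernel, mode="same")
        details.append(smooth - next_smooth)      # detail = what smoothing removed
        smooth = next_smooth
    # Reconstruction with amplified details sharpens transitions (edges);
    # gain = 1 reproduces the original signal exactly (telescoping sum).
    return smooth + gain * sum(details)
```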
A central challenge for remote sensing images in GIS and other applications is their size. It is now common to have images larger than 10,000 by 10,000 pixels, with multiple bands and more than 8 bits per pixel per band. Compression techniques have become popular because they make storing and accessing large images more efficient, and it is valuable to develop image processing techniques that operate on remote sensing images in the compressed domain. Among compression techniques, those based on the discrete wavelet transform have become popular because of their excellent energy compaction and multi-resolution capability; the new JPEG2000 image compression standard is built on it. Greater bit depths, tiles, resolution progression, quality progression, and fast access to spatial locations all contribute to the capability and functionality of JPEG2000, making it an ideal technology for remote sensing and GIS applications; moreover, a compressed-domain image processing mechanism is offered within the JPEG2000 framework. This paper discusses approaches for manipulating large remote sensing images compressed with JPEG2000. Display, geometric transformation, and cropping of large remote sensing images are demonstrated. Experimental results show that JPEG2000 techniques provide good performance in compressed remote sensing image manipulation.
A self-synchronizing blind image watermarking technique based on the wavelet transform is proposed in this paper. Synchronization is a serious problem for any watermarking scheme, yet many existing schemes do not address it. Image manipulations such as geometric distortions, even slight ones, can break the synchronization between the watermark embedding and detection processes and thereby disable the detector; for any watermark detector, synchronization is a precondition of correct detection. In this approach, a new way to estimate the parameters of the asynchronous distortion from one or two characteristics of the host image is proposed to re-synchronize the watermarking scheme. These characteristics can also serve as a private key of the detector to enhance the safety of the watermark. Independent Component Analysis is adopted so that the detector can not merely detect but actually extract the watermarks blindly, without using any information about the host image, the watermark, or any other embedding or attack information. A time tag is also embedded in the watermark to resolve the deadlock problem of multiply embedded watermarks: the detector can extract all embedded watermarks and determine who embedded a watermark first. Experimental results demonstrate that the proposed technique is robust against the watermark attacks produced by StirMark, the popular watermark testing software, such as JPEG compression, scaling, translation, rotation, shearing and filtering.
This paper proposes a fast and robust algorithm for the classification and recognition of ships based on Principal Component Analysis (PCA). Three-dimensional ship models are built with the MultiGen modeling software and then projected with the Vega simulation software to obtain two-dimensional ship silhouettes. Comparing the PCA method against the Back-Propagation (BP) neural network method in training and testing experiments on simulated ship recognition shows a sharp contrast between them. Recognition results from simulated data are presented: the correct recognition rate of the PCA method is markedly higher for each of the five ship types than that of the neural network method, and the number of times a ship type is misrecognized as one of the other ships is greatly reduced.
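A generic PCA-plus-nearest-neighbor recognizer of the kind described can be sketched as follows (silhouette preprocessing and the actual ship data are outside this illustration):

```python
import numpy as np

def fit_pca(X, n_components):
    """Learn a PCA subspace from training samples (one flattened silhouette per row)."""
    mean = X.mean(axis=0)
    # SVD of the centered data gives the principal axes as rows of vt
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def classify(x, mean, components, train_proj, train_labels):
    """Project a test sample and return the label of the nearest training projection."""
    p = components @ (x - mean)
    dists = np.linalg.norm(train_proj - p, axis=1)
    return train_labels[np.argmin(dists)]
```

In practice the training silhouettes for all ship types are projected once (`train_proj = (X - mean) @ components.T`) and each test silhouette is matched against them in the low-dimensional eigenspace.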
Based on statistical learning theory, the support vector machine (SVM) is a novel type of learning machine that contains polynomial, neural network and radial basis function (RBF) machines as special cases. The mapped least squares support vector machine (MLS-SVM) is a special least squares SVM (LS-SVM) that extends the application of the SVM to image processing. Based on the MLS-SVM, a family of filters for approximating the partial derivatives of the digital image surface is designed. Prior information (e.g., the local dominant orientation) is incorporated into a two-dimensional weighting function, and the weighted MLS-SVM with the radial basis function kernel is applied to design the proposed filters. An exemplary application of the proposed filters to fingerprint image segmentation is also presented.
Noise is inevitable in Hyperspectral Remote Sensing (HRS) images, so it is very important to design effective filters to reduce its impact and enhance image quality and information content. Based on the characteristics of HRS images, three filtering strategies are proposed in this paper: image-dimension filtering, spectral-dimension filtering and three-dimensional filtering. The principle of image-dimension filtering is similar to traditional image filtering in the spatial and frequency domains: the image of each band is viewed as an independent set and filtered on its own, using filters such as the mean filter, the median filter and frequency-domain filters to reduce noise in every band. The key idea of spectral-dimension filtering is to take every pixel as the processing target: the gray values (or albedo) of the pixel across all bands form a spectral vector, and a filter is applied to the spectral vector of every pixel; mean filters at different scales are tested in this paper. Three-dimensional filtering differs from the former two methods in that it processes the spatial and spectral dimensions simultaneously. It views the HRS image as a large data cube with rows, columns and layers (bands), so the filter operates on the data cube. In this paper a 3×3×3 cube is used as the filtering template, which means that the neighbors of a pixel in adjacent bands are used in the filtering, so both spatial and spectral information is considered. Finally, several examples are presented; quality assessment of single bands, similarity measures for selected pixels and other statistical indexes are used to assess the performance, and related conclusions and suggestions are given.
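The 3×3×3 cube filtering strategy, in its mean-filter form, can be sketched directly:

```python
import numpy as np

def cube_mean_filter(cube):
    """3x3x3 mean filter over (row, col, band): spatial and spectral smoothing at once."""
    rows, cols, bands = cube.shape
    padded = np.pad(cube.astype(float), 1, mode="edge")  # replicate borders
    out = np.zeros((rows, cols, bands))
    # Sum the 27 shifted copies of the cube, then average
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            for db in (-1, 0, 1):
                out += padded[1 + dr:1 + dr + rows,
                              1 + dc:1 + dc + cols,
                              1 + db:1 + db + bands]
    return out / 27.0
```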
Mathematical morphology is shape sensitive and can be used in edge detection because it deals directly with the geometrical properties of objects. The usual morphology operates on all pixels of an image, which may waste computation time as well as degrade the resulting precision. To solve these problems, we modify the traditional gray-scale morphology definitions and propose an edge detection algorithm for gray images, generalized to multispectral images via the theory of multivalued morphology, that operates only on the parts of interest in an image and reacts to certain characteristics of the region.
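A standard building block for morphological edge detection is the morphological gradient, dilation minus erosion; a flat-structuring-element sketch (the paper's modified definitions operate only on regions of interest, which this plain version does not):

```python
import numpy as np

def _windows(img, size):
    """Stack all size x size neighborhoods of img (edge-replicated borders)."""
    r = size // 2
    p = np.pad(img, r, mode="edge")
    h, w = img.shape
    return np.stack([p[dy:dy + h, dx:dx + w]
                     for dy in range(size) for dx in range(size)])

def morph_gradient(img, size=3):
    """Edge strength = dilation - erosion (max minus min over the neighborhood)."""
    win = _windows(img.astype(float), size)
    return win.max(axis=0) - win.min(axis=0)
```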
In order to speed up image processing in an embedded environment, we construct a parallel image processing system. The system includes parallel hardware, a newly defined language for parallel image processing, and a set of software tools. In view of high performance, low power dissipation, and the characteristics of image processing, we construct a SIMD coprocessor as an image processing accelerator; a RISC host processor manages the whole system. The SIMD processor is scalable. We also define a new language extended from standard C: a new data type, "stream", and a new keyword, "kernel", are added to the language to explicitly describe parallelism. For the new hardware and language of parallel image processing, we also develop software tools for this parallel system. The software tools map programs to code that runs on the new hardware; for example, a scheduler transfers stream data between memories and register files. Our analysis shows that the parallel image processing system not only matches the characteristics of image applications but is also easy to implement using VLSI technology. With the support of the new language and software tools, an embedded real-time parallel image processing system becomes available to programmers.
The unsatisfactory results of traditional pixel-based classification methods on high-resolution remotely sensed imagery may be improved by employing image segmentation. After a brief review of image segmentation, this paper introduces FNEA, the image segmentation method used in eCognition, the first commercial object-oriented image processing software in the world, for automatic object extraction from high-resolution satellite images and automatic updating of GIS databases. From the point of view of information extraction, the author analyzes the advantages and disadvantages of the algorithm through several examples and puts forward possible improvements.
Techniques for converting 2D plans or survey data to CAD models (model-based modeling) are very labor intensive, and methods for rendering such models are generally not photo-realistic. The photogrammetric modeling technique (image-based modeling) is an interactive tool that allows the user to build a geometric model of an object from a set of photographs. This paper adopts an approach that combines model-based modeling and image-based modeling, making the modeling process more effective and accurate.
Change detection of land use and land cover has always been a focus of remote sensing study and application. Based on image fusion techniques, a new approach for detecting vegetation change is proposed, using change vectors of the brightness index (BI) and the perpendicular vegetation index (PVI) extracted from multi-temporal remotely sensed imagery. The procedure is as follows. First, the Landsat ETM+ imagery is geometrically corrected and registered. Second, bands 2, 3, and 4 and the panchromatic image of Landsat ETM+ are fused by à trous wavelet fusion, and bands 1, 2, and 3 of SPOT are registered to the fused images. Third, the brightness index and the perpendicular vegetation index are extracted from the SPOT images and the fused images, respectively. Finally, change vectors are obtained and used to detect vegetation change. The test results show that this approach to detecting vegetation change is very effective.
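The change-vector step can be sketched as below. The BI and PVI formulas and the soil-line parameters `a`, `b` are common textbook choices assumed here for illustration, not necessarily the exact definitions used in the paper.

```python
import numpy as np

def change_magnitude(red1, nir1, red2, nir2, a=1.0, b=0.0):
    """Change-vector magnitude in (BI, PVI) feature space between two dates.
    BI = sqrt((R^2 + NIR^2)/2) and PVI = (NIR - a*R - b)/sqrt(1 + a^2) are
    assumed textbook forms; a, b are soil-line parameters."""
    def bi(r, n):
        return np.sqrt((r**2 + n**2) / 2.0)
    def pvi(r, n):
        return (n - a * r - b) / np.sqrt(1.0 + a**2)
    d_bi = bi(red2, nir2) - bi(red1, nir1)
    d_pvi = pvi(red2, nir2) - pvi(red1, nir1)
    # per-pixel Euclidean length of the change vector
    return np.sqrt(d_bi**2 + d_pvi**2)
```

Identical acquisitions give zero magnitude; a change in the NIR band (e.g. vegetation growth or loss) produces a positive magnitude that can then be thresholded.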
Traditionally, the splitting-and-merging algorithm for image segmentation is based on a quadtree data structure, which is not convenient for expressing the topology of regions, line segments, and other information. A new framework, "TIN-based image segmentation and grouping", is discussed in this paper, in which edge information and region information are integrated directly. First, a constrained triangle mesh is constructed from edge segments extracted by EDISON or another algorithm. Then, region growing based on triangles is performed to generate a coarse segmentation. Finally, the regions are further merged using perceptual organization rules.
The aim of the present work is to assess the performance of a three-dimensional Double Directional Filtering (TDDDF) algorithm for detecting and tracking a weak, dim moving target against a complex cluttered background in infrared image sequences. This paper proposes a novel TDDDF to improve the integrated signal-to-clutter ratio (ISCR) and further enhance the target-energy accumulation ability of the three-dimensional directional filter (TDDF). Since the TDDDF handles white (or quasi-white) noise well but is less effective against complex cloud-scene backgrounds, a new pre-whitening method termed the Spatial-Temporal Adaptive Filtering algorithm is applied prior to filtering to suppress background clutter. Extensive experimental results demonstrate the proposed algorithm's ability to detect weak, dim point targets against cloud-cluttered backgrounds. Finally, performance comparisons of the proposed algorithm and the TDDF on real IR image data are presented, which show the advantages of the proposed TDDDF filters.
Fractals describe self-similar phenomena in signals, and self-similarity is the most important character of a fractal. Pentland provides an excellent explanation of the ruggedness of natural surfaces, and fractal-based description of image texture has been used effectively in the characterization and segmentation of natural scenes. A real surface is self-similar over some range of scales, rather than over all scales; that is, the self-similarity of a terrain surface is not so perfect as to remain invariant across the whole scale space. To describe such a self-similarity distribution, a self-similarity curve can be plotted and divided into several linear regions. We present a new parameter called Self-similarity Degree (SD), defined by analogy with information entropy, to denote this self-similarity distribution. In addition, one general characterization of self-similarity is that it results from physical processes: terrain surfaces are created by interacting endogenic and exogenic processes. We therefore introduce self-similarity analysis and the multifractal singularity spectrum to describe such a complex physical field. Through self-similarity analysis and the singularity spectrum, the different self-similar structures and the interaction of processes in terrain surfaces are depicted. Our studies show that self-similarity is a relative notion and that natural scenes contain abundant self-similar structures. Moreover, noise always destroys the self-similarity of the original natural surface and changes its singularity distribution.
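The paper's Self-similarity Degree is its own construction, but scale-dependent self-similarity of this kind can be illustrated with a standard box-counting estimate of fractal dimension, the slope of log N(s) against log(1/s) over a range of box sizes s:

```python
import numpy as np

def box_count_dimension(mask, scales=(1, 2, 4, 8)):
    """Estimate fractal dimension of a non-empty 2-D binary set by box
    counting: least-squares slope of log N(s) versus log(1/s)."""
    counts = []
    for s in scales:
        m = (mask.shape[0] // s) * s   # crop so boxes tile evenly
        grid = mask[:m, :m].reshape(m // s, s, -1, s)
        counts.append(grid.any(axis=(1, 3)).sum())  # occupied boxes N(s)
    logs = np.log(1.0 / np.asarray(scales, float))
    return np.polyfit(logs, np.log(counts), 1)[0]
```

A filled square yields a slope of 2 and a single filled row a slope of 1, matching the Euclidean dimensions of those sets; for a natural surface the log-log plot is only piecewise linear, which is exactly the scale-limited self-similarity the abstract describes.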
The purpose of research on semantic image retrieval is to bridge the significant gap between images' low-level visual features and users' high-level semantics; the description and extraction of image semantics are two key issues. We suggest an approach of GIS Semantics-Based Remote Sensing Image Retrieval (GISSBIR), involving spatial object semantic representation, spatial object semantic matching, and the extension of spatial relationships. An object-oriented GIS semantic model and a concept semantic network are employed to express the semantics of spatial objects. We also design a semantic mediator to handle the semantic discordance between the user and the system, and extend the directional spatial relationships of Oracle 9i Spatial. By applying Boolean calculation to the results of GIS atomic retrieval, vector retrieval results can be obtained, from which we can find remote sensing image retrieval results that share the same coordinate frame with the GIS data. The effectiveness of this GISSBIR approach is demonstrated by actual experiments.
Objective quality assessment has been widely used in image processing for its convenience. Since human eyes are usually the final observers of images, many researchers have for decades studied objective image quality evaluation methods based on the Human Visual System (HVS). Although many methods have been proposed, most of them are based on error sensitivity and are no better than simple PSNR (MSE). Recently the Structural Similarity (SSIM) index, based on images' structural information, was proposed; its philosophy is that the HVS is highly adapted to extract structural information from the viewing field, and simulation results have shown it to be better than PSNR (MSE). By studying SSIM closely, we find that it fails to measure blurred images with many flat regions and has some shortcomings in its equation. Based on this, we propose an improved objective quality assessment method called Gradient-based Structural Similarity (GSSIM). Experimental results show that GSSIM is more consistent with the HVS than SSIM and PSNR (MSE).
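The gradient-based idea can be sketched as follows: the luminance term uses raw intensities while the contrast and structure terms are computed on gradient magnitudes. The single-window simplification (the published method is computed over local windows) and the 8-bit-range constants c1, c2 are assumptions for illustration.

```python
import numpy as np

def gssim(x, y, c1=6.5025, c2=58.5225):
    """Single-window Gradient-based SSIM sketch: luminance from raw
    intensities, combined contrast/structure from gradient magnitudes
    (central differences). c1, c2 follow the usual SSIM defaults for
    8-bit data; local windowing is omitted."""
    def grad_mag(img):
        gy, gx = np.gradient(img.astype(float))
        return np.hypot(gx, gy)
    x, y = x.astype(float), y.astype(float)
    gx_, gy_ = grad_mag(x), grad_mag(y)
    mx, my = x.mean(), y.mean()
    vx, vy = gx_.var(), gy_.var()
    cxy = ((gx_ - gx_.mean()) * (gy_ - gy_.mean())).mean()
    lum = (2 * mx * my + c1) / (mx**2 + my**2 + c1)
    cs = (2 * cxy + c2) / (vx + vy + c2)
    return lum * cs
```

An image compared with itself scores 1; replacing it with a flat (blurred-to-constant) version of the same mean keeps the luminance term at 1 but drops the gradient-based term, which is exactly the blur sensitivity the abstract claims plain SSIM lacks.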
Tissue segmentation in 3-D data is an important technology in medical visualization, image segmentation, and virtual endoscopy. It is difficult to implement tissue segmentation in 3-D data automatically and accurately because of its complexity. A semi-automatic tissue segmentation algorithm for 3-D data is proposed in this paper, based on a boundary model and local characteristic structure. Inner voxels and outer voxels are found from a pre-appointed voxel based on the boundary model. Then, boundary voxels are correctly classified into different tissues by the eigenvalues of their Hessian matrices, based on the local characteristic structure. Only the eigenvalues of the boundary voxels are computed, so little time is used compared with other algorithms based on local characteristic structure. The algorithm can quickly and effectively realize the segmentation of a single tissue.
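Computing the Hessian eigenvalues at a voxel can be sketched with central differences as below; ordering the eigenvalues by absolute value is a common convention in local-structure analysis, while the classification rule applied to them would follow the paper and is omitted here.

```python
import numpy as np

def hessian_eigs(vol, p):
    """Eigenvalues of the 3-D Hessian at interior voxel p, from central
    second differences, ordered by absolute value."""
    p = np.asarray(p)
    e = np.eye(3, dtype=int)
    H = np.empty((3, 3))
    for i in range(3):
        # pure second derivative along axis i
        H[i, i] = vol[tuple(p + e[i])] - 2 * vol[tuple(p)] + vol[tuple(p - e[i])]
        for j in range(i + 1, 3):
            # mixed derivative along axes i, j
            H[i, j] = H[j, i] = (vol[tuple(p + e[i] + e[j])]
                                 - vol[tuple(p + e[i] - e[j])]
                                 - vol[tuple(p - e[i] + e[j])]
                                 + vol[tuple(p - e[i] - e[j])]) / 4.0
    w = np.linalg.eigvalsh(H)
    return w[np.argsort(np.abs(w))]
```

On a synthetic quadratic volume the central differences are exact, so the eigenvalues recover the known curvatures; the sketch also shows why restricting the computation to boundary voxels saves time, since each evaluation touches only a small neighborhood.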
Thresholding is a basic issue in digital image processing. At the pixel, property, or entity level, a threshold is frequently used to identify the scope of one image object and the boundary between two different image objects. In this paper, the image threshold is understood through its basic characteristics. In theory, image thresholding is self-adaptive spatially, temporally, and spectrally. However, past contributions such as histogram-based image thresholding mainly address spectrally or gray-value adaptive thresholding, i.e., spatially irrelevant image thresholding. Here a two-step approach to spatially adaptive image thresholding is proposed. First, we make a rough image segmentation using our prior knowledge about the image. Then we compute histogram-like statistics to generate a representative threshold in each of the segmented image regions; the representative threshold is positioned at the center of its image region. Innovatively, a spatial surface fitting function is given to obtain the threshold at any position in the image. The spatial surface fitting function is generated from an orthogonal basis of functions along the x and y axes, respectively. With the representative thresholds in the initially segmented regions, the coefficients c_rs of the spatial surface fitting function are estimated according to the least-squared-error criterion. The overall accuracy of the resulting thresholds is evaluated with the mean squared error. Some potential improvements to our approach, including the initial image segmentation, the determination of the initial representative thresholds, and the selection of higher-order basis functions, are elaborated for more sound image thresholding.
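The surface-fitting step can be sketched as a least-squares fit of T(x, y) = sum of c_rs * x^r * y^s to the representative thresholds at the region centers; plain monomials are used below instead of the paper's orthogonal basis, for brevity.

```python
import numpy as np

def fit_threshold_surface(points, thresholds, degree=1):
    """Least-squares fit of T(x, y) = sum c_ij x^i y^j to representative
    thresholds at region-center points; returns a callable surface."""
    pts = np.asarray(points, float)
    terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1)]
    # design matrix: one monomial column per (i, j) term
    A = np.column_stack([pts[:, 0]**i * pts[:, 1]**j for i, j in terms])
    coef, *_ = np.linalg.lstsq(A, np.asarray(thresholds, float), rcond=None)
    def surface(x, y):
        return sum(c * x**i * y**j for c, (i, j) in zip(coef, terms))
    return surface
```

Given representative thresholds sampled from a plane, the fitted surface reproduces them exactly and interpolates a threshold at any other image position, which is the spatially adaptive behavior the paper is after.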
In clinical practice, digital subtraction angiography (DSA) is a powerful technique for visualizing blood vessels in the human body, and blood vessel segmentation is a main problem in 3D vascular reconstruction. In this paper, we propose a new adaptive thresholding method for the segmentation of DSA images. Each pixel of a DSA image is classified as a vessel or background point according to a threshold and a few local characteristic limits depending on information contained in the pixel's neighborhood window. The size of the neighborhood window is set according to a priori knowledge of the diameter of the vessels, to ensure that each window definitely contains background. Experiments on cerebral DSA images are presented, which show that our proposed method yields better results than global thresholding methods and some other local thresholding methods.
Radar scene matching has been widely used in many application fields such as remote sensing, navigation, terrain-map matching, and scene variation analysis. Radar image geometry is quite different from that of optical satellite imagery, since radar imaging is slant-range imaging of reflected electromagnetic microwaves. The differences between radar and optical satellite images are distinct, such as the layover distortion of ground truth and speckle noise, which degrade the image to such an extent that features are unclear and difficult to extract. Therefore, factors such as hypsography, ground truth, sensor altitude, and imaging time should be taken into account for radar-to-optical image matching. In this paper, we develop an image matching algorithm based on multi-area selection in the reference map using fuzzy sets. Image matching is generally a procedure that calculates a similarity measure between the sensed image and a corresponding area intercepted from the reference map, and searches for the maximum position in the correlation map. Our method adopts a converse matching strategy: it selects multiple areas in the optical reference map using fuzzy sets as model images, matches them against the sensed image respectively with a normalized cross-correlation matching algorithm, and fuses the match results to obtain the optimal registered position. Multi-area selection mainly considers two influencing factors, ground-truth texture features and the hypsography (DEM) of the imaging region, which suppresses the influence of regions with large imaging variance. Experimental results show the method is effective in registration performance and in reducing computation.
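The core of the matching step, exhaustive normalized cross-correlation of a model area against the sensed image, can be sketched as below (practical systems typically accelerate this with FFTs or integral images):

```python
import numpy as np

def ncc_match(image, template):
    """Exhaustive normalized cross-correlation: returns the (row, col) of
    the best match of template in image, plus the peak NCC value."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template.astype(float)
    t -= t.mean()
    tnorm = np.sqrt((t**2).sum())
    best, pos = -2.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw].astype(float)
            w = w - w.mean()
            denom = np.sqrt((w**2).sum()) * tnorm
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, pos = score, (r, c)
    return pos, best
```

Embedding a distinctive patch in an otherwise flat image, the search recovers the patch position with a peak score of 1; in the paper's scheme several such searches (one per selected model area) are run and their results fused.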
A modified method for passive ranging from the optical flow (OF) of an infrared target is proposed. The range is estimated by analyzing the movement of the imaging sensor and the OF of the target. Traditional passive ranging algorithms do not use OF in range estimation, so the resolution of the imaging system strongly influences the ranging result. Here, the OF of the potential target is computed from the temporal change rate over an adequate length of the image sequence, and the direct use of OF in range estimation is supported by a modified 3D gradient operator. We present an OF-parameter clustering method to achieve a more robust result. First, we use a median filter to remove the wild points and outliers in the image sequences. Second, the moving target is segmented with an auto-adaptive thresholding algorithm; the whole target is obtained from the OF of the pixels inside the smallest rectangle that surrounds the target. A range expression at each pixel can be calculated directly from the pixel location, the motion parameters, and the OF. Finally, the range to the target is taken as the average range over the target-area pixels. The experimental results show that the algorithm can be applied to infrared target passive ranging applications.
This paper addresses part of the problem of automatically detecting hazardous articles in accompanied baggage from multi-energy X-ray imagery for station security. In this detection problem, segmentation is the first significant stage: it extracts the objects of interest in the images for detailed analysis and recognition in subsequent stages. Due to the complexity of articles in passenger baggage, the X-ray images generally contain regions in which different objects overlap. In order to obtain integrated objects for subsequent analysis and recognition, these regions should be multi-segmented and allocated to different objects simultaneously. In this paper, we propose an ARG-based segmentation method using fuzzy attributed relational subgraph (ARSG) matching based on a neighborhood-structure ARSG model base (MB). The proposed segmentation strategy consists of two phases: pre-segmentation and post-segmentation. In the pre-segmentation phase, an X-ray image is segmented into non-overlapping segments using multi-threshold and statistical techniques according to color and texture features, and is represented by an attributed relational graph (ARG). Subsequently, in the post-segmentation phase, we propose a graph-matching algorithm using a fuzzy similarity distance (FSD) that represents the similarity of the attributed relation between a vertex neighborhood and a certain model. Finally, the number-of-layers values of the vertices, which describe the number of objects overlapping in the corresponding regions, are all obtained; the ARG of the image is completed, and the integrated segments of objects in the image can be extracted using relational attributes and spatial information. The results show a good average integrity of objects segmented from the experimental images.
Traditional methods for the absolute orientation of aerial imagery models are mainly based on ground control points whose three-dimensional coordinates are available, and use highly accurate positioning algorithms or semi-automatic manual arrangement to locate them on the image plane. Usually, it is difficult to find suitable control point features in a 3D vector map for the absolute orientation of aerial imagery. However, an existing 3D vector map can provide sufficient detail, both in the maps and in the images, to ensure that many well-distributed and unchanged linear features can be used for absolute orientation. In order to meet the requirements of quickly updating 3D vector maps, this paper discusses a method for semi-automatic model absolute orientation that extracts linear features from the image and matches them with 3D line objects in the vector map. It solves the problem of model absolute orientation without using ground control points, allowing quick updating of 3D vector maps.
The Canny edge detector is widely used in computer vision to locate sharp intensity changes and find object boundaries in an image. It removes weak edges by hysteresis thresholding, but has difficulty finding the upper and lower thresholds for unimodal gradient-magnitude distributions. In this paper, an algorithm based on finding the edge region and the background region in the gradient-magnitude histogram is proposed, which is capable of performing hysteresis thresholding quickly and adaptively. Its effectiveness is demonstrated on a variety of images, showing its successful application to the Canny edge detector: the results of the proposed fast adaptive-threshold Canny detector are better than those of the Canny detector with fixed thresholds. Further tests on many kinds of real data, using our method to select the thresholds, also give good results, demonstrating that the proposed algorithm performs hysteresis thresholding quickly and adaptively, often better than the fixed-threshold Canny edge detector run for comparison.
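One simple way to derive the two hysteresis thresholds from the gradient-magnitude histogram is a percentile rule, shown below as a stand-in for the paper's edge-region/background-region analysis; the non-edge fraction and the low/high ratio are conventional heuristic values, not the paper's.

```python
import numpy as np

def adaptive_canny_thresholds(grad_mag, nonedge_fraction=0.7, ratio=0.4):
    """Pick hysteresis thresholds from the gradient-magnitude histogram:
    `high` is the value below which `nonedge_fraction` of the pixels fall
    (treated as background); `low` is a fixed ratio of `high`."""
    high = np.percentile(grad_mag, nonedge_fraction * 100.0)
    return ratio * high, high
```

Because both thresholds are read off the image's own gradient statistics, they adapt per image, which is the key property the fixed-threshold Canny detector lacks.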
Vehicle license plate (VLP) recognition is of great importance to many traffic applications. Though researchers have paid much attention to VLP recognition, there is not yet a fully operational VLP recognition system, for many reasons. This paper discusses a valid and practical method for vehicle license plate recognition based on geometric constraints and multi-feature decision, including statistical and structural features. In general, VLP recognition includes the following steps: locating the VLP, character segmentation, and character recognition; this paper discusses the three steps in detail. The characters of a VLP are often inclined due to many factors, which makes them more difficult to recognize; therefore geometric constraints, such as the general ratio of length to width and the perpendicularity of adjacent edges, are used for incline correction. Image moments have been proved invariant to translation, rotation, and scaling, so they are used as one feature for character recognition. Strokes are the basic elements of writing, and hence taking them as a feature is also helpful for character recognition. Finally, we take the image moments, the strokes and the number of each kind of stroke in each character image, and some other structural and statistical features as the multi-feature set, and match each character image against sample character images so that each character image can be recognized by a BP neural network. The proposed method combines statistical and structural features for VLP recognition, and the results show its validity and efficiency.
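The moment feature can be illustrated with the first Hu invariant, phi1 = eta20 + eta02, computed from normalized central moments (a full system would use all seven invariants alongside the stroke features):

```python
import numpy as np

def hu_first_moment(img):
    """First Hu invariant phi1 = eta20 + eta02 from normalized central
    moments; invariant to translation, scale and rotation."""
    img = img.astype(float)
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xbar, ybar = (xs * img).sum() / m00, (ys * img).sum() / m00
    def mu(p, q):
        # central moment of order (p, q)
        return ((xs - xbar)**p * (ys - ybar)**q * img).sum()
    def eta(p, q):
        # scale-normalized central moment
        return mu(p, q) / m00**(1 + (p + q) / 2.0)
    return eta(2, 0) + eta(0, 2)
```

Two copies of the same rectangular blob at different positions give the same phi1, illustrating the translation invariance exploited when matching character images against samples.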
Images of outdoor scenes captured in bad weather usually suffer from poor contrast because the light reflected from the scene is attenuated by scattering from heavy aerosol particles, and the attenuation increases exponentially with the distance of each scene point from the sensor. In this paper, we propose a simple method to remove weather effects using only a single image. Our algorithm has three main steps. First, based on the exponential law, we propose a new primary model for de-weathering with only one parameter, which is simple and easy to manipulate, and we prove its theoretical validity. Second, we introduce two distance fields to estimate the depth information over the whole image for two different situations; they are computed interactively. Third, to overcome the lack of exact depth information in a single image, we propose an interactive post-modification algorithm to finely adjust the local restoration effect; it is based on two piecewise functions controlled by two parameters. Our algorithm is suitable not only for gray-level images but also for RGB color images. Compared with other methods, our method is robust and the results are quite satisfying.
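Assuming the standard haze imaging equation I = J*t + A*(1 - t) with transmission t = exp(-beta*d), restoration reduces to inverting that model once a depth field is available; the airlight value and the transmission floor below are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def deweather(img, depth, beta, airlight=1.0):
    """Invert the exponential attenuation model I = J*t + A*(1 - t),
    t = exp(-beta * d), assuming airlight A and a per-pixel depth field
    are known (in the paper, depth comes from the distance fields)."""
    t = np.exp(-beta * np.asarray(depth, float))
    t = np.maximum(t, 1e-3)  # floor to avoid amplifying noise at large depth
    return (img - airlight * (1.0 - t)) / t
```

Forward-simulating the degradation on a known radiance image and then inverting it recovers the original values, which is the sanity check behind the one-parameter model.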
Vehicle detection is critical to traffic surveillance and management systems. In real outdoor daylight scenes, shadows cast by moving vehicles are often detected as part of the vehicles, since shadows move in accordance with the vehicles' movement, which heavily affects the accuracy of vehicle detection. In this paper, an algorithm is proposed to suppress moving cast shadows for vehicle detection, based on four properties of the moving cast shadow. Simulation results indicate that the proposed algorithm can effectively suppress moving cast shadows in the input image, which helps to improve the accuracy of vehicle detection.
Image segmentation is a necessary step in image analysis. The support vector machine is considered a good candidate because of its good generalization performance, especially when the number of training samples is very small and the dimension of the feature space is very high. This paper investigates image segmentation using support vector machines. Experimental results show that the support vector machine is a promising technique for image segmentation.
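A minimal sketch of SVM-based pixel classification: a tiny Pegasos-style linear SVM trained by sub-gradient descent on the hinge loss, standing in for the kernel SVMs typically used in practice. The feature vectors (here 2-D points) would in a segmentation setting be per-pixel descriptors such as color or texture, with labels in {-1, +1} for the two regions.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.05, epochs=500, seed=0):
    """Pegasos-style linear SVM: stochastic sub-gradient descent on the
    regularized hinge loss; y in {-1, +1}. Returns weights with bias last."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias feature
    w = np.zeros(Xb.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * Xb[i].dot(w) < 1:          # margin violated
                w = (1 - eta * lam) * w + eta * y[i] * Xb[i]
            else:                                # only shrink (regularize)
                w = (1 - eta * lam) * w
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xb.dot(w))
```

On linearly separable training data the learned hyperplane separates the two classes; each image pixel would then be labeled by evaluating `predict` on its feature vector.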
Faults are one of the main objects of study in the remote sensing of geology; they reflect the causes and basic characteristics of diastrophism. Given the complexity and uncertainty of remote sensing information, this paper discusses how to fuse remote sensing and geoscience information based on traditional statistics and intelligent techniques, puts forward a theory of remote-sensing image-comprehending agents, and builds a model for automatic extraction. Then, based on this theory and model, the paper analyzes the spectrum of faults and completes the extraction, quantitative statistical analysis, and spatial analysis of faults in remote sensing images, revealing their distribution rules and spatial structures and thereby supporting an understanding of the geological structure and tectonics of the whole area.
This paper presents a method for port detection based on a feature-level fusion framework. Bearing in mind that parallel lines and rectangular corners are the main features of most ports, and that ports are large-scale man-made objects, these features are first extracted from high-to-moderate resolution optical satellite imagery. To balance data acquisition cost against spatial resolution, SPOT panchromatic imagery is used for this feature extraction. Considering the weather conditions in coastal areas, which are typically rainy and cloudy, a Radarsat image with spatial resolution similar to the SPOT panchromatic image is used to extract linear features along the coastline. Since ships and boats are typical objects that can be easily detected in radar imagery, they are treated as supplementary features for port detection. All extracted features are associated under the feature-level fusion framework. The whole procedure is as follows. The input images are first preprocessed: histogram stretching is applied to the SPOT image to improve visual quality, and the radar image is filtered to suppress speckle noise. Registration between the SPOT and Radarsat images is then carried out. Because the Radarsat image is used mainly for coastline extraction and ship detection, rigorous geometric processing is omitted, as little attention is paid to the land area; a common polynomial model is used for co-registration, with ground control points selected manually from both images. Because feature-level fusion is adopted, registration accuracy is not as critical as it would be in pixel-level fusion. The next step is the detection of linear features and rectangular corners in both the optical and radar images. The detected linear features are then fitted by a least-mean-square-error algorithm. Finally, all detected features are associated by a simple weighted-mean algorithm, with different weights assigned to features from the optical and radar images.
An automatic port detection system based on this procedure has been developed. Experiments show that most ports can be detected by our method.
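The least-mean-square-error line fit applied to each detected linear feature can be sketched as follows; the edge-pixel coordinates below are hypothetical.

```python
import numpy as np

# Edge pixels belonging to one detected linear feature (toy data, roughly y = 2x).
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([0.1, 1.9, 4.1, 5.9, 8.0])

# Solve [x 1] @ [slope, intercept] = y in the least-squares sense.
A = np.column_stack([xs, np.ones_like(xs)])
(slope, intercept), *_ = np.linalg.lstsq(A, ys, rcond=None)
```

A robust estimator (or total least squares) would be preferable for near-vertical features, where the slope-intercept parameterization degenerates.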
Image segmentation is one of the most attractive problems in image processing, and the extraction of useful features from the image is crucial. However, neither color features nor texture features, both widely used, can handle the segmentation problem well on their own, especially for complex images. We adopt a rough-fuzzy set approach, which handles high dimensionality well, for image segmentation using both color and texture features. The approach first constructs a structure called a fuzzy data cube, whose attributes are the fuzzy sets associated with image features; the fuzzy data cube, which may be two-dimensional or higher-dimensional, serves as the basic data structure of the method. A membership function for the similarity-relation-based rough-fuzzy set is defined, along with a dependency function that evaluates the importance of an attribute for image segmentation. The rough-fuzzy set is then used to discover similarity sets in the fuzzy data cube and obtain the segmentation result. Experiments on mosaic and natural images demonstrate the effectiveness of the proposed method.
A fast and effective image compression method based on the wavelet transform and a neural network algorithm is proposed in this paper. First, the image is decomposed with a wavelet basis and the input vectors for the neural network are formed. Second, a fuzzy learning rule is used to train the SOFM network and obtain the codebook. This method efficiently removes correlation in the image data, yielding a low-bit-rate transmission stream. A clear advantage of the method is that statistical codebooks are established for various classes of image data, so high coding efficiency is achieved because a new codebook need not be generated for each image. Experiments illustrate that this algorithm is an effective encoding scheme for image compression, with a compression ratio exceeding that of JPEG.
Three-dimensional motion analysis of image sequences estimates 3D motion parameters from a 2D image sequence or a 3D (object-space) "image" sequence. In theory, both monocular and binocular image sequences permit 3D motion analysis, but they differ in computational complexity and in the accuracy of the results. To compare the accuracy of 3D motion parameters estimated from 2D and 3D sequences, this article borrows the photogrammetric ideas of relative orientation and spatial similarity transformation and presents an approach that connects the image data with real 3D space: using calibration results and other additional conditions, the results computed from monocular and binocular sequences are unified into an object-space coordinate system whose origin is a fixed point in object space, making the two sets of results directly comparable. Experimental results on real data using this method are given.
Although the snake model is now widely used and has produced quite good results, it still has some key difficulties: a narrow capture range and an inability to move into boundary concavities. A newer snake model, the Gradient Vector Flow (GVF) snake, overcomes these difficulties. The GVF snake creates its own external force field, the GVF field, which makes it insensitive to initialization and able to move into concave boundary regions. However, the GVF snake requires a large amount of computation and is easily disturbed by noise. A wavelet-based GVF snake model can reduce the amount of computation thanks to the multi-scale character of the wavelet transform. Moreover, because signal and noise have different singularities, the local modulus maxima of their wavelet coefficients evolve differently across resolutions, so noise can also be distinguished from signal in the wavelet-based GVF snake model. The wavelet-based GVF snake is faster and more robust than the traditional snake model.
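The GVF field itself is computed by a diffusion iteration: the field (u, v) is relaxed toward the edge-map gradient while being smoothed by a Laplacian term. A minimal sketch (iteration count, the smoothing weight mu, and the toy edge map are illustrative choices, not the paper's settings):

```python
import numpy as np

def gvf(f, mu=0.2, n_iter=50):
    """Gradient Vector Flow field of edge map f (Xu-Prince iteration)."""
    def lap(a):
        # 5-point Laplacian with replicated borders via edge padding.
        p = np.pad(a, 1, mode="edge")
        return (p[:-2, 1:-1] + p[2:, 1:-1] +
                p[1:-1, :-2] + p[1:-1, 2:] - 4 * a)

    fy, fx = np.gradient(f)                 # edge-map gradient
    u, v = fx.copy(), fy.copy()
    mag2 = fx ** 2 + fy ** 2                # weights the data-fidelity term
    for _ in range(n_iter):
        u = u + mu * lap(u) - mag2 * (u - fx)
        v = v + mu * lap(v) - mag2 * (v - fy)
    return u, v

# Toy edge map: a single bright point in a small image; the diffusion
# spreads its gradient forces across the whole domain (the wide capture range).
f = np.zeros((9, 9))
f[4, 4] = 1.0
u, v = gvf(f)
```

The wavelet-based variant would run this iteration on coarse-scale coefficients first, reducing the number of fine-scale iterations needed.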
Nowadays, large quantities of data are generated at ever shorter revisit intervals by various remote sensors, prompting spatio-temporally integrated strategies for data handling and information extraction. Change detection is one of the essential techniques for near-real-time analysis in remote sensing of the environment. Assuming overall phenological conditions are comparable, change detection is performed either on a two-point timescale (bi-temporal) or on a continuous timescale (temporal trajectory analysis), the latter having the advantage of minimizing the influence of phenology. Univariate image differencing is the most widely applied change detection algorithm: one date of imagery is subtracted from a second date that has been co-registered to the first. With "perfect" data, positive and negative values in the resulting difference image would represent areas of change, and zero values no change.
To quantify the uncertainty in remotely sensed change detection, a geostatistical framework is proposed in which the mean and standard error of the pixel- or parcel-based difference between the means of the bi-temporal image/map subsets are computed with spatial and temporal dependence properly accounted for, paving the way for probabilistic mapping of changes. To make the approach adaptable to both regular and irregular sampling schemes, block co-kriging is formulated to evaluate the means and standard errors of differences between spatially aggregated means. The geostatistical framework for uncertainty mapping in bi-temporal image/map-based change detection is tested on simulated data sets whose spatial and temporal correlation can be prescribed. It is anticipated that the geostatistical approach advocated in this paper will make a valuable addition to the literature on spatial uncertainty in remote sensing and change detection.
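The baseline differencing step, with a naive per-block mean and standard error, can be sketched as follows on simulated data. Note the standard error here assumes pixel independence; the block co-kriging formulation above exists precisely to replace that assumption with spatially correlated error propagation.

```python
import numpy as np

rng = np.random.default_rng(0)
date1 = rng.normal(10.0, 1.0, size=(8, 8))      # simulated image, time 1
date2 = date1.copy()
date2[:4, :] += 3.0                             # true change in the top half

diff = date2 - date1                            # co-registered differencing

def block_stats(d, r0, r1, c0, c1):
    """Mean and (independence-assuming) standard error of a block of diffs."""
    block = d[r0:r1, c0:c1].ravel()
    return block.mean(), block.std(ddof=1) / np.sqrt(block.size)

changed_mean, changed_se = block_stats(diff, 0, 4, 0, 8)   # changed parcel
stable_mean, stable_se = block_stats(diff, 4, 8, 0, 8)     # unchanged parcel
```

Comparing each block mean against zero in units of its standard error yields the probabilistic change map the paper aims at, once the standard errors are kriging-based.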
Global motion estimation (GME) is an important tool widely used in computer vision, video processing, and other fields. In this paper, we propose an efficient, robust, and fast method for estimating global motion from compressed image sequences. For the global motion model we adopt the six-parameter affine model because of its reasonable trade-off between complexity and accuracy. To improve the accuracy and computational efficiency of global motion estimation, we present a new algorithm for segmenting background from foreground. Motion-vector samples associated with background macroblocks are then selected to estimate the motion model parameters. Lastly, based on the statistics of the estimation error, some sample pairs may be rejected as outliers, compensating for the fact that some of the samples obtained from P-frame motion vectors are highly erroneous, and the parameters may be refined by re-estimating from the remaining data. Extensive experiments show that the proposed method is efficient and robust in terms of both computational complexity and accuracy.
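The fit-reject-refit loop for the six-parameter affine model can be sketched as follows on synthetic motion vectors; the rejection threshold (twice the mean residual) is an illustrative choice, not the paper's error statistic.

```python
import numpy as np

def fit_affine(pts, mvs):
    """Least-squares fit of the affine model [u, v] = [x, y, 1] @ A."""
    X = np.column_stack([pts, np.ones(len(pts))])
    A, *_ = np.linalg.lstsq(X, mvs, rcond=None)
    return A                                   # shape (3, 2): six parameters

rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, size=(40, 2))        # macroblock center positions
true_A = np.array([[0.01, 0.0], [0.0, 0.01], [2.0, -1.0]])
mvs = np.column_stack([pts, np.ones(40)]) @ true_A
mvs[:3] += 20.0                                # a few grossly wrong P-frame MVs

A1 = fit_affine(pts, mvs)                      # initial estimate (biased)
resid = np.linalg.norm(
    np.column_stack([pts, np.ones(40)]) @ A1 - mvs, axis=1)
keep = resid < 2.0 * resid.mean()              # reject high-error samples
A2 = fit_affine(pts[keep], mvs[keep])          # refined estimate
```

In the paper's setting the input samples would be background-macroblock motion vectors parsed from the compressed bitstream rather than synthetic vectors.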
Image fusion techniques have been widely applied to remote sensing image analysis and processing, and quality assessment of image fusion in remote sensing has likewise become a research issue at home and abroad. Traditional assessment methods combine the calculation of quantitative indexes with visual interpretation to compare fused images quantitatively and qualitatively. However, the existing assessment methods have two defects. On the one hand, most indexes lack theoretical support for comparing different fusion methods. On the other hand, there is no uniform preference among most quantitative assessment indexes when they are applied to estimate fusion effects; that is, spatial resolution and spectral features cannot be analyzed synchronously by these indexes, and there is no general method unifying spatial and spectral feature assessment. Therefore, on the basis of an approximate general model of four traditional fusion methods, namely Intensity-Hue-Saturation (IHS) triangle transform fusion, High-Pass Filter (HPF) fusion, Principal Component Analysis (PCA) fusion, and Wavelet Transform (WT) fusion, this paper proposes a correlation coefficient assessment method based on the IHS cylindrical transform. Experiments show that this method not only evaluates spatial and spectral features under a uniform preference, but also compares the source images with the fused images and reveals differences among the fusion methods. Compared with traditional assessment methods, the new method is more intuitive and agrees better with subjective evaluation.
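The correlation coefficient index itself is straightforward; a minimal sketch on toy data (the paper would apply it to IHS-cylindrical-transform components rather than raw bands):

```python
import numpy as np

def corr(a, b):
    """Correlation coefficient between two images, flattened to 1D."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

rng = np.random.default_rng(2)
ms_band = rng.uniform(0, 255, size=(16, 16))            # original MS band
fused_band = ms_band + rng.normal(0, 5, size=(16, 16))  # nearly faithful fusion

# High correlation with the original band indicates good spectral fidelity;
# the analogous correlation against the panchromatic image (not shown)
# would score spatial fidelity.
spectral_cc = corr(fused_band, ms_band)
```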
A new matching measure based on the distance transform, weighted by characteristics of the object contour, is proposed to enhance the matching contribution of local features. In this paper the contour characteristic is expressed by corner membership: the distance transform is weighted by the difference in corner membership between the object contour and the model, yielding a more robust matching measure. Matching experiments on real forward-looking infrared (FLIR) images show that the proposed measure markedly increases the between-class distance between object and non-object and improves the matching probability and performance.
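The underlying distance-transform (chamfer-style) matching can be sketched as follows; the brute-force transform, the toy edge image, and the corner-membership weights are all illustrative stand-ins for the paper's formulation.

```python
import numpy as np

def distance_transform(edges):
    """Brute-force Euclidean distance of every pixel to the nearest edge pixel."""
    ys, xs = np.nonzero(edges)
    pts = np.stack([ys, xs], axis=1)
    h, w = edges.shape
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"),
                    axis=-1)
    d = np.linalg.norm(grid[:, :, None, :] - pts[None, None, :, :], axis=-1)
    return d.min(axis=2)

edges = np.zeros((8, 8))
edges[3, 2:6] = 1                               # observed edge segment
dt = distance_transform(edges)

model = [(3, 2), (3, 3), (3, 4), (3, 5)]        # model contour points
weights = [1.0, 0.5, 0.5, 1.0]                  # toy corner-membership weights
# Weighted average distance from model points to observed edges; lower is better.
score = sum(w * dt[p] for w, p in zip(weights, model)) / sum(weights)
```

Larger weights at corner points make the measure reward agreement exactly where the contour is most distinctive, which is the effect the paper exploits.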
Ferrography image segmentation is the foundation of ferrography image recognition and analysis. This paper presents a method of ferrography image segmentation based on a fuzzy neural network. Because neural networks can learn and fuzzy neural networks can handle fuzzy information, the approach is well suited to fuzzy and complicated ferrography images. Our tests show that it is effective at clearing the image background.
With the incessant expansion of information, the scope of multimedia processing is becoming ever wider. To use this vast amount of information efficiently and effectively, a content-based retrieval system has been designed. The system is composed of an image gathering agent, a query submitting agent server, a color retrieval agent, a texture retrieval agent, a shape retrieval agent, a search results integration agent, and a result browser. The image gathering agent collects images from the network and stores them in the image database. The query submitting agent server offers query samples to the other agents and coordinates their cooperation. The color, texture, and shape retrieval agents provide retrieval over the image database based on color, texture, and shape features, respectively. The search results integration agent ties together the retrieval agents, the query submitting agent, and the browser: it obtains retrieval requests from the query submitting agent and browser, dispatches them to each agent by means of primitives, and meanwhile combines the results returned by each agent and sends them to the browser for the user to view. Experimental results show that all agents in the system can work cooperatively to retrieve image information.
With the development of remote sensing technology, satellites such as SPOT-5 and Quickbird can collect high-spatial-resolution images. The SPOT-5 satellite simultaneously collects 5-m panchromatic and 10-m multispectral images; after interpolation at the ground station, a 2.5-m panchromatic image can be provided (5-m ground resolution in panchromatic mode and 2.5 m in supermode). The Quickbird satellite simultaneously collects 0.61-m panchromatic and 2.44-m multispectral images. By merging the 2.5-m panchromatic and 10-m multispectral SPOT-5 images, multispectral images of approximately the same resolution as the Quickbird multispectral images were acquired. Images acquired by these different satellites can then be used to detect urban change. In this paper, images of Wuhan University in China acquired by SPOT-5 and Quickbird are used to detect the change of trees across seasons. The results show that it is possible to detect the change of trees, and some factors that affect the change detection are listed.
An approach for detecting regions of interest (ROIs) is proposed. It can rapidly locate airport and seashore regions in a large optical remote sensing image by means of feature-space analysis. The image is first tiled and read into memory block by block, and features are extracted from each block. The extracted features are then clustered into homogeneous regions with the mean shift method, and finally a threshold locates the airport and seashore regions. The proposed ROI detection method offers strong robustness, autonomy, and parallelism, and can drastically reduce the difficulty of target detection in large remote sensing images.
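The cluster-then-threshold step can be sketched with a one-dimensional mean shift on a per-block feature; the feature values, bandwidth, and threshold below are synthetic illustrations, not the paper's settings.

```python
import numpy as np

def mean_shift_1d(x, bandwidth=1.0, n_iter=50):
    """Shift each sample toward the mean of its neighbors until it reaches a mode."""
    modes = x.astype(float).copy()
    for _ in range(n_iter):
        for i, m in enumerate(modes):
            near = x[np.abs(x - m) <= bandwidth]
            modes[i] = near.mean()
    return modes

# Per-block feature values (e.g. mean gray level): two homogeneous groups.
feats = np.array([1.0, 1.1, 0.9, 5.0, 5.2, 4.9])
modes = mean_shift_1d(feats)
labels = (modes > 3.0).astype(int)   # threshold on the modes labels the blocks
```

In the full method the feature would be multi-dimensional and the flagged blocks would delimit candidate airport/seashore ROIs for finer detection.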
The sea surface temperature (SST) is a marine variable that influences the atmosphere and a sensitive indicator of climatic change. A temperature front is the boundary between two water bodies in the ocean with a relatively large temperature difference. Gradients of environmental factors such as sea temperature and salinity are very steep across it, which makes the frontal area an invisible screen limiting the range of fish activity and prompting fish to cluster. Finding temperature front areas is therefore valuable for fishing, and extracting temperature fronts from various kinds of images is an important topic in front research. The Roberts and Sobel operators are common edge detection operators for images, but results show that these two common edge detectors cannot extract temperature fronts from SST images effectively. A grid-based algorithm is put forward in this paper that can extract temperature fronts accurately. The experimental results demonstrate the effectiveness of the proposed algorithm.
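The Sobel baseline the paper compares against can be sketched as follows on a toy SST field with a sharp north-south front; the field and threshold are illustrative.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via the 3x3 Sobel kernels (borders left at zero)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

sst = np.zeros((6, 8))
sst[:, 4:] = 4.0                         # 4-degree step front at column 4
mag = sobel_magnitude(sst)
front = mag > mag.max() / 2              # thresholded candidate front pixels
```

On real SST imagery this clean response degrades, since cloud gaps and gradual gradients break the simple threshold, which motivates the paper's grid-based alternative.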
Image fusion is a technique for obtaining high-spatial-resolution multispectral images from low-spatial-resolution multispectral and high-spatial-resolution panchromatic images. Various techniques exist to perform such fusion. These techniques, however, do not always preserve the spectral information content of the original multispectral image in the fused image. Hence, in this study a recent and efficient fusion technique based on the Laplacian pyramid was attempted, and its efficiency was compared with that of the à trous wavelet transform technique. A lower-resolution Ikonos multispectral image and its high-resolution panchromatic counterpart were fused using both the Laplacian pyramid and the à trous wavelet transform fusion techniques. The outputs were evaluated using visual comparison, statistical entropy, average gradient, and correlation coefficient. Compared with the à trous wavelet transform, the Laplacian pyramid technique proved to be the better option, since it preserved most of the spectral information content while improving spatial information.
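A one-level sketch of Laplacian-pyramid-style detail injection follows: the panchromatic high-pass layer (pan minus its blurred version) is added to the multispectral band. A box blur stands in for the pyramid's Gaussian filter, and the arrays are toy data.

```python
import numpy as np

def box_blur(img):
    """3x3 box filter with replicated borders (stand-in for a Gaussian)."""
    p = np.pad(img, 1, mode="edge")
    return (p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:] +
            p[1:-1, :-2] + p[1:-1, 1:-1] + p[1:-1, 2:] +
            p[2:, :-2] + p[2:, 1:-1] + p[2:, 2:]) / 9.0

pan = np.outer(np.ones(6), np.arange(6, dtype=float))  # sharp pan ramp (toy)
ms_band = box_blur(pan) * 0.5                          # blurred, rescaled MS band

detail = pan - box_blur(pan)       # Laplacian (high-pass) layer of the pan image
fused = ms_band + detail           # inject pan detail into the MS band
```

The full method repeats this decompose/inject/reconstruct cycle over several pyramid levels, which is what lets it add spatial detail while leaving the low-frequency (spectral) content of the multispectral band largely untouched.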
A study is presented of the performance of support vector machine (SVM) and maximum likelihood classification (MLC) algorithms on texture features. A novel multivariate modeling method, partial least squares regression (PLSR), is applied to derive new texture features from the texture spectrum (TS). Three texture features, together with the PLSR-combined TS features, are used in classification tests on Brodatz textures. The experiments show that: 1) SVM achieves higher classification precision and better generalization than MLC regardless of the texture features used, and is better suited to small training-set-size (TSS) situations; 2) the proposed feature combination method (PLSR) greatly improves the discriminative ability of TS features for MLC, but not for SVM.
Remote sensing image classification is an important and complex problem. Conventional remote sensing image classification methods are mostly based on Bayesian subjective probability theory, which has many shortcomings in handling uncertainty, so a new trend is to apply the mathematical theory of evidence to remote sensing image classification. This paper first introduces the differences between Dempster-Shafer (D-S) evidence theory and Bayesian subjective probability theory in handling uncertainty, along with the main definitions and algorithms of D-S evidence theory. In particular, the degrees of belief, plausibility, and support are the bridges by which D-S evidence theory is applied in other fields. The support function, through which D-S evidence theory is used in pattern recognition, is introduced in detail, and the degree of support is applied to classification. We compute degree-of-support surfaces for broad classes such as urban land, farmland, forest land, and water, then use "hard classification" to obtain an initial classification result. If the initial classification accuracy does not meet requirements, the degree-of-support surfaces below the threshold are reclassified until the final result reaches a satisfactory accuracy. We conclude that the main advantages of this method are that reclassification can follow classification and that its classification accuracy is very high. The method has a dependable theoretical basis, broad applicability, easy operation, and research potential.
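The evidence-combination step at the heart of such methods is Dempster's rule; a minimal sketch over a two-class frame (class names and mass values are illustrative, not the paper's):

```python
# Dempster's rule of combination for two basic mass assignments over the
# frame of discernment {urban, water}; 'theta' (the whole frame) represents
# ignorance. Masses on each hypothesis set must sum to 1 per source.
def combine(m1, m2):
    """Normalized conjunctive combination of two mass functions."""
    out, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                out[inter] = out.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb          # mass assigned to the empty set
    k = 1.0 - conflict                       # normalization factor
    return {s: w / k for s, w in out.items()}

theta = frozenset({"urban", "water"})
m1 = {frozenset({"urban"}): 0.6, theta: 0.4}   # evidence source 1 (e.g. band A)
m2 = {frozenset({"urban"}): 0.5, theta: 0.5}   # evidence source 2 (e.g. band B)
m = combine(m1, m2)
belief_urban = m[frozenset({"urban"})]         # combined support for 'urban'
```

Evaluating such combined masses per pixel yields the degree-of-support surfaces on which the paper's hard classification and reclassification operate.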