This paper describes a new numerical technique for quantitative characterization of rough surfaces in terms of self-affine fractal measures using non-planar reference surfaces. The algorithm is used to extract the scaling properties of coating surfaces to provide insight into the deposition process by characterizing the condition of the substrate, the evolving surface morphology, and the bulk coating properties for a selection of deposition/sputtering parameters. We have employed this numerical technique to analyze the surface topography of a variety of coatings in an effort to better understand the intrinsic structure and to determine whether a correlation exists between the deposition parameters and the surface topography. Our results indicate that the coating structures are consistent with a self-affine fractal description and that the extracted fractal measures are sensitive to variations in the deposition parameters.
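As a minimal sketch of the kind of scaling analysis described above, the snippet below estimates a self-affine roughness (Hurst) exponent from the growth of RMS roughness with window size on a 1-D profile; the window sizes, synthetic profile, and detrending choice are illustrative assumptions, not the paper's algorithm with non-planar reference surfaces.

```python
import numpy as np

def roughness_exponent(profile, window_sizes):
    """Return (alpha, w), where w[i] is the mean RMS roughness over windows of
    size window_sizes[i] and alpha is the slope of log w versus log window size."""
    w = []
    for L in window_sizes:
        n_win = len(profile) // L
        segs = profile[:n_win * L].reshape(n_win, L)
        # detrend each window by its mean, then take the RMS deviation
        rms = np.sqrt(((segs - segs.mean(axis=1, keepdims=True)) ** 2).mean(axis=1))
        w.append(rms.mean())
    alpha = np.polyfit(np.log(window_sizes), np.log(w), 1)[0]
    return alpha, np.array(w)

# synthetic example: a random-walk profile has a roughness exponent near 0.5
rng = np.random.default_rng(0)
profile = np.cumsum(rng.standard_normal(4096))
alpha, _ = roughness_exponent(profile, window_sizes=[8, 16, 32, 64, 128, 256])
print(f"estimated roughness exponent ~ {alpha:.2f}")
```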
A general image analysis and segmentation method using fuzzy set classification and learning is described. The method uses a learned fuzzy representation of pixel-region characteristics, based upon the conjunction and disjunction of extracted and derived fuzzy color and texture features. Both positive and negative exemplars of some visually apparent characteristic which forms the basis of the inspection, input by a human operator, are used together with a clustering algorithm to construct positive and negative similarity membership functions. From these memberships, composite fuzzified images, P and N, are produced using fuzzy union. Classification is accomplished via image defuzzification, whereby linguistic meaning is assigned to each pixel in the fuzzy set using a fuzzy inference operation. The technique permits: (1) strict color and texture discrimination, (2) machine learning of color and texture characteristics of regions, and (3) judicious labeling of each pixel based upon the learned fuzzy representation and fuzzy classification. This approach appears ideal for applications involving visual inspection and allows the development of image-based inspection systems which may be trained and used by relatively unskilled workers. We show three different examples involving the visual inspection of mixed waste drums, lumber, and woven fabric.
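A minimal sketch of the positive/negative similarity idea follows: cluster centers learned from operator-marked exemplar pixels define Gaussian similarity memberships, fuzzy union (maximum) gives composite images P and N, and each pixel is labeled by comparing them. The feature choice, Gaussian form, and sigma are assumptions for illustration only.

```python
import numpy as np

def similarity_image(features, centers, sigma=0.1):
    """Fuzzy union (max) over cluster centers of Gaussian similarity memberships.
    features: (H, W, d) per-pixel feature vectors; centers: (k, d)."""
    d2 = ((features[..., None, :] - centers) ** 2).sum(axis=-1)   # (H, W, k)
    return np.exp(-d2 / (2.0 * sigma ** 2)).max(axis=-1)          # fuzzy OR

def defuzzify(P, N):
    """Assign the 'target' label where the positive membership dominates."""
    return P > N

# toy example: 2-D color-like features, one positive and one negative exemplar cluster
rng = np.random.default_rng(1)
img = rng.random((4, 4, 2))
P = similarity_image(img, centers=np.array([[0.9, 0.9]]))
N = similarity_image(img, centers=np.array([[0.1, 0.1]]))
labels = defuzzify(P, N)
```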
This paper demonstrates results of wavelet-based restoration for scenes with pixel-scale features, where noise amplification is controlled using a tunable parameter. The image acquisition model is based on the so-called C/D/C system model, which accounts for system blur, the effects of aliasing, and additive noise. By way of wavelet-domain modeling, both the image acquisition kernel and the representations of scenes and images become discrete. Consequently, the image restoration problem is formulated as a discrete least-squares problem in the wavelet domain. The treatment of noise is related to the singular values of the image acquisition kernel. Pixel-scale features can be restored exactly in the absence of noise, and results are similar in the presence of noise, except for some noise-amplification and truncation artifacts. We devise an automated empirical procedure that provides a choice of the restoration parameters which conservatively avoids noise amplification. This paper extends work in wavelet-based restoration and builds on research in C/D/C model-based restoration.
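To make the "tunable parameter tied to the singular values" idea concrete, here is a hedged 1-D sketch: the least-squares restoration is solved through the SVD of an acquisition matrix, and singular values below a relative threshold tau are discarded to limit noise amplification. The blur matrix, threshold, and test scene are illustrative assumptions, not the paper's C/D/C formulation.

```python
import numpy as np

def restore_truncated_svd(H, g, tau=1e-2):
    """Solve min ||H f - g||^2 while suppressing directions with small singular
    values; tau trades fidelity against noise amplification."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    keep = s > tau * s[0]
    s_inv = np.where(keep, 1.0 / np.where(keep, s, 1.0), 0.0)
    return Vt.T @ (s_inv * (U.T @ g))

# 1-D example: Gaussian blur of a scene containing a pixel-scale feature
n = 64
x = np.arange(n)
H = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 1.5) ** 2)
H /= H.sum(axis=1, keepdims=True)
f_true = np.zeros(n); f_true[32] = 1.0            # pixel-scale spike
g = H @ f_true + 1e-3 * np.random.default_rng(2).standard_normal(n)
f_hat = restore_truncated_svd(H, g, tau=1e-2)
```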
The goal of multi-image classification is to identify and label 'similar regions' within a scene. The ability to correctly classify a remotely sensed multi-image of a scene is affected by the ability of the classification process to adequately compensate for the effects of atmospheric variations and sensor anomalies. Better classification may be obtained if the multi-image is preprocessed before classification, so as to reduce the adverse effects of image formation. In this paper, we discuss the overall impact on multi-spectral image classification of the retinex image enhancement algorithm, which performs dynamic range compression, reduces the dependence on lighting conditions, and generally enhances apparent spatial resolution. The retinex has been successfully applied to the enhancement of many different types of grayscale and color images. We show in this paper that retinex preprocessing improves the spatial structure of multi-spectral images and thus provides better within-class variations than would otherwise be obtained without the preprocessing. For a series of multi-spectral images obtained with diffuse and direct lighting, we show that without retinex preprocessing the class spectral signatures vary substantially with the lighting conditions. Whereas multi-dimensional clustering without preprocessing produced one-class homogeneous regions, classification on the preprocessed images produced multi-class non-homogeneous regions. This lack of homogeneity is explained by the interaction between the different agronomic treatments applied to the regions: the preprocessed images are closer to ground truth. The principal advantage that the retinex offers is that, for different lighting conditions, classifications derived from the retinex-preprocessed images look remarkably 'similar', and thus more consistent, whereas classifications derived from the original images, without preprocessing, are much less similar.
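A minimal single-scale retinex sketch is shown below to illustrate the kind of preprocessing discussed above: the log of each band minus the log of a Gaussian-smoothed surround compresses dynamic range and reduces illumination dependence. The surround scale and the final rescaling are assumptions; the paper's multiscale retinex differs in detail.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(band, sigma=30.0, eps=1e-6):
    band = band.astype(float) + eps
    surround = gaussian_filter(band, sigma) + eps
    r = np.log(band) - np.log(surround)
    # stretch to [0, 1] for display or subsequent classification
    return (r - r.min()) / (r.max() - r.min() + eps)

def retinex_preprocess(cube, sigma=30.0):
    """Apply the band-wise retinex to a multispectral cube of shape (H, W, bands)."""
    return np.stack([single_scale_retinex(cube[..., b], sigma)
                     for b in range(cube.shape[-1])], axis=-1)
```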
Video systems have seen a resurgence in military applications since the recent proliferation of unmanned aerial vehicles (UAVs). Video systems offer light weight, low cost, and proven COTS technology. Video has not proven to be a panacea, however, as generally available storage and transmission systems are limited in bandwidth. Digital video systems collect data at rates of up to 270 Mb/s; typical transmission bandwidths range from 9600 baud to 10 Mb/s. Either extended transmission times or data compression are needed to handle video bit streams. Video compression algorithms have been developed and evaluated in the commercial broadcast and entertainment industry. The Moving Picture Experts Group developed MPEG-1 to compress video to CD-ROM bandwidths and MPEG-2 to cover the range of 5-10 Mb/s and higher. Commercial technology has not extended to lower bandwidths, nor has the impact of MPEG compression for military applications been demonstrated. Using digitized video collected by UAV systems, the effects of data compression on image interpretability and task satisfaction were investigated. Using both MPEG-2 and frame decimation, video clips were compressed to rates of 6 Mb/s, 1.5 Mb/s, and 0.256 Mb/s. Experienced image analysts provided task satisfaction estimates and National Image Interpretability Rating Scale ratings on the compressed and uncompressed video clips. Results were analyzed to define the effects of compression rate and method on interpretability and task satisfaction. Lossless compression was estimated to occur at approximately 10 Mb/s, and frame decimation was superior to MPEG-2 at low bit rates.
This paper addresses the problems associated with dynamic change detection of an image sequence in the compressed domain. In particular, wavelet compression is considered here. With its multi-resolution decomposition, there are many different routes to image compression with wavelets. This paper presents some preliminary results on the effects of different compression schemes on spatio-temporal change detection metrics.
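As a hedged sketch of what a compressed-domain change metric might look like, the snippet below compares the wavelet coefficients of two consecutive frames directly, without reconstructing the images; the wavelet, decomposition level, and normalized-difference metric are assumptions, not the paper's metrics.

```python
import numpy as np
import pywt

def wavelet_change_metric(frame_a, frame_b, wavelet="haar", level=2):
    """Normalized L1 difference between the wavelet coefficients of two frames:
    0 for identical frames, approaching 1 for totally different ones."""
    arr_a, _ = pywt.coeffs_to_array(pywt.wavedec2(frame_a.astype(float), wavelet, level=level))
    arr_b, _ = pywt.coeffs_to_array(pywt.wavedec2(frame_b.astype(float), wavelet, level=level))
    num = np.abs(arr_a - arr_b).sum()
    den = np.abs(arr_a).sum() + np.abs(arr_b).sum()
    return num / max(den, 1e-12)
```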
Automatic algorithm generation for image processing applications is not a new idea; however, previous work is either restricted to morphological operators or impractical. In this paper, we show recent research results in the development and use of meta-algorithms, i.e. algorithms which lead to new algorithms. Although the concept is generally applicable, the application domain in this work is restricted to image processing. The meta-algorithm concept described in this paper is based upon our work in dynamic algorithms. The paper first presents the concept of dynamic algorithms which, on the basis of training and archived algorithmic experience embedded in an algorithm graph (AG), dynamically adjust the sequence of operations applied to the input image data. Each node in the tree-based representation of a dynamic algorithm with out-degree greater than 2 is a decision node. At these nodes, the algorithm examines the input data and determines which path will most likely achieve the desired results. This is currently done using nearest-neighbor classification, as sketched below. The details of this implementation are shown. The constrained perturbation of existing algorithm graphs, coupled with a suitable search strategy, is one mechanism to achieve meta-algorithms and offers rich potential for the discovery of new algorithms. In our work, a meta-algorithm autonomously generates new dynamic algorithm graphs via genetic recombination of existing algorithm graphs. The AG representation is well suited to this genetic-like perturbation, using a commonly employed technique in artificial neural network synthesis, namely the blueprint representation of graphs. A number of examples are presented. One of the principal limitations of our current approach is the need for significant human input in the learning phase. Efforts to overcome this limitation are discussed. Future research directions are indicated.
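The sketch below illustrates, under stated assumptions, how a decision node might route data through an algorithm graph by nearest-neighbor matching against archived experience. The features, exemplar vectors, and branch names are hypothetical; the paper's feature measurements and graph machinery are not specified here.

```python
import numpy as np

def image_features(img):
    # crude global descriptors; a real system would use richer measurements
    return np.array([img.mean(), img.std(), np.abs(np.diff(img, axis=1)).mean()])

class DecisionNode:
    """Chooses an outgoing branch of the algorithm graph by 1-nearest-neighbor
    matching against archived (feature vector, branch) training pairs."""
    def __init__(self, exemplar_features, exemplar_branches):
        self.X = np.asarray(exemplar_features, dtype=float)
        self.branches = list(exemplar_branches)

    def route(self, img):
        d = np.linalg.norm(self.X - image_features(img), axis=1)
        return self.branches[int(np.argmin(d))]

# archived experience: noisy inputs were best served by denoising first
node = DecisionNode([[0.5, 0.30, 0.20], [0.5, 0.05, 0.02]],
                    ["denoise_then_threshold", "threshold_only"])
rng = np.random.default_rng(0)
print(node.route(rng.random((32, 32))))   # a noisy input routes to the denoising path
```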
Space-variant (SV) digital image restoration methods attempt to restore images degraded by blurs that vary over the image field. One specific source of SV blurs is geometrical optical aberrations, which divert light rays passing through the optical system away from an ideal focal point. For simple optical systems, aberrations can become significant even at moderate field angles. Restoration methods have been developed for some space-variant aberrations when they are individually dominant, but such dominance is not typically characteristic of conventional optical systems. In this paper, an iterative method of restoration that is applicable to generalized, known space-variant blurs is applied to simulations of images generated with a spherical lens. The method is based on the Gauss-Seidel method of solution to systems of linear equations. The method is applied to sub-images having off-axis displacements of up to 453 pixels, and found to be superior in restoration effectiveness to Fourier methods in that range of field angles.
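A hedged 1-D sketch of the Gauss-Seidel idea follows: the space-variant blur is written as a matrix whose rows are local PSFs, and Gauss-Seidel sweeps are applied to the normal equations of the resulting linear system. The blur construction, normal-equations formulation, and iteration count are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def gauss_seidel(A, b, iterations=200):
    """Gauss-Seidel sweeps for A x = b; converges when A is symmetric positive definite."""
    n = A.shape[0]
    x = np.zeros(n)
    for _ in range(iterations):
        for i in range(n):
            # use the newest available estimates of the other unknowns
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

# space-variant 1-D blur: Gaussian PSF whose width grows with field position
n = 48
t = np.arange(n)
sigma = 0.6 + 1.0 * t / n
H = np.exp(-0.5 * ((t[None, :] - t[:, None]) / sigma[:, None]) ** 2)
H /= H.sum(axis=1, keepdims=True)
f_true = (np.abs(t - n // 2) < 3).astype(float)
g = H @ f_true
# solve the normal equations H^T H f = H^T g (symmetric positive definite for invertible H)
f_hat = gauss_seidel(H.T @ H, H.T @ g)
```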
This paper presents a DCT-based image coder optimized for transmission over binary symmetric channels. The proposed coder uses a robust channel-optimized trellis-coded quantization stage that is designed to optimize the image coding based on the channel characteristics. This optimization is performed only at the level of the source encoder and does not include any channel coding for error protection. The robust nature of the coder increases the security level of the encoded bit stream and provides a much more visually pleasing rendition of the decoded image. Consequently, the proposed robust channel-optimized image coder is especially suitable for wireless transmission due to its reduced complexity, its robustness to non-stationary signals and channels, and its increased security level.
This paper describes a computationally efficient method for fast retrieval of color images from multimedia and imaging databases. Although the proposed algorithm can operate in an n-dimensional feature space for search, in our experiments we use only one 3D vector as the key for indexing and searching the color pictures of the selected archives.
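A minimal sketch of retrieval keyed on a single 3-D color vector is given below: each archived image is indexed by its mean (R, G, B), and a query is answered by nearest neighbors in that 3-D space. The specific feature and distance are assumptions for illustration; the paper's algorithm supports general n-dimensional keys.

```python
import numpy as np

def color_key(img):
    """3-D key: mean value of each color channel of an (H, W, 3) image."""
    return img.reshape(-1, 3).mean(axis=0)

class ColorIndex:
    def __init__(self):
        self.keys, self.names = [], []

    def add(self, name, img):
        self.keys.append(color_key(img))
        self.names.append(name)

    def query(self, img, top=5):
        d = np.linalg.norm(np.array(self.keys) - color_key(img), axis=1)
        order = np.argsort(d)[:top]
        return [(self.names[i], float(d[i])) for i in order]
```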
This paper describes a method for assessing the information density and efficiency of hyperspectral imaging systems. The approach computes the information density of the acquired signal as a function of the hyperspectral system design, the signal-to-noise ratio, and the statistics of the scene radiance. Information efficiency is the ratio of the information density to the data density. The assessment can be used in system design, for example, to optimize information efficiency with respect to the number of spectral bands. Experimental results illustrate that information efficiency exhibits a single distinct maximum as a function of the number of spectral bands, indicating the design with peak information efficiency.
Symmetric-axis-based representations have been widely employed to enhance visualization and to enable quantitative analysis, classification, and registration of medical images. Although the basic idea of shape representation via local symmetries is very old, various new techniques for extracting local symmetries have recently been proposed. Despite seemingly different tools, the main, if not only, difference among these new methods is how the computation is carried out. Recently, a new method for computing symmetries was proposed by Tari and Shah, and a comparison of the method to related works was provided. The method constructs a nested symmetry set of increasing degree of symmetry and decreasing dimension. This is achieved by examining the local geometry of a new distance function. The method does not suppress any of the symmetry-based representations. In this paper, a computational implementation for assigning perceptual meaning and significance to the points in the symmetry set is provided. The coloring scheme allows recovery of features of interest, such as shape skeletons, from the complicated symmetry representation. The method is applicable to arbitrary data, including color and multi-modality images. On the computational side, for a 256 x 256 binary image, two minutes on a low-end Pentium machine is sufficient to compute both the distance function and the colored nested symmetries at four scales.
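Purely as an illustration of extracting a skeleton-like symmetry subset from a distance function, the sketch below uses a conventional Euclidean distance transform and a simple local-maximum ridge test; it is not the Tari-Shah distance function or their nested symmetry set, and the shape and ridge rule are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, maximum_filter

def crude_skeleton(binary_shape):
    d = distance_transform_edt(binary_shape)
    # ridge points: local maxima of the distance function inside the shape
    ridge = (d > 0) & (d == maximum_filter(d, size=3))
    return d, ridge

shape = np.zeros((64, 64), bool)
shape[16:48, 8:56] = True          # a simple rectangle
dist, skel = crude_skeleton(shape)
```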
Real-time image processing systems are very complex real-time systems that challenge the limitations of hardware resources. Typically, real-time image processing systems are implemented on pipeline or multiprocessor architectures. For multiprocessor architectures, we distinguish shared memory and/or inter-processor communication links for the communication. In the following paper we present a system design for real-time image processing systems on multiprocessor architectures using inter-processor communication links for the communication. The paper focuses on the specific design issues important for real-time image processing systems. A detailed overview of the complete design is given by presenting the following topics: hardware topology, programming model, inter-processor communication, image processing infrastructure, and image processing. The system design is illustrated using the ground-to-ground ATDT system developed by Computing Devices Canada for its Fire-Control and Surveillance products.
In this paper, the design and use of a toolbox that integrates several texture analysis algorithms is presented. The most important statistical, spectral, and multiresolution methods are implemented. Examples of the toolbox interfaces are given. The interface windows for the algorithms and classifiers are explained. Experimental results are presented which show the application of the toolbox algorithms for image classification and segmentation. Textures that are transformed can also be classified; an example is presented using a wavelet algorithm. Segmentation of remote sensing images is discussed using the co-occurrence matrix method. Classification with extrema features is demonstrated for different sets of images. An application of the algorithm to segmenting industrial images using a logical transform algorithm is discussed. The toolbox is organized in a hierarchical manner. It also implements auxiliary methods, such as edge detection and noise filtering, that aid in texture analysis.
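As a small sketch of the co-occurrence matrix method mentioned above, the snippet below builds a gray-level co-occurrence matrix for a horizontal offset and derives two classic Haralick-type features; the quantization level, offset, and chosen features are assumptions, not the toolbox's exact settings.

```python
import numpy as np

def glcm_features(img, levels=16):
    """Contrast and energy from a gray-level co-occurrence matrix with offset (0, 1)."""
    q = (img.astype(float) / (img.max() + 1e-12) * (levels - 1)).astype(int)
    P = np.zeros((levels, levels))
    np.add.at(P, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)   # count horizontal pairs
    P /= P.sum()
    idx = np.arange(levels)
    contrast = float((((idx[:, None] - idx[None, :]) ** 2) * P).sum())
    energy = float((P ** 2).sum())
    return contrast, energy

rng = np.random.default_rng(5)
smooth = np.tile(np.linspace(0, 1, 32), (32, 1))   # slowly varying texture
rough = rng.random((32, 32))                       # noisy texture
print(glcm_features(smooth), glcm_features(rough))
```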
The primary objective of this project is to define a methodology to depict the motion of deep convective cloud systems as observed from satellite imagery. These clouds are defined as clusters of pixels with Cloud Top Pressure (IPC) ≤ 440 millibars and Cloud Optical Thickness (TAU) ≥ 23, which are high in the atmosphere and sufficiently thick to produce significant rainfall. Clouds are one of the major factors in understanding the earth's climate. Evaluating cloud motion is important in understanding atmospheric dynamics, and visualizations are vital because they provide a good way to observe change. IPC and TAU values have been collected for April of 1989 from the International Satellite Cloud Climatology Project low-resolution database for the northern latitudes between 30 and 60 degrees. Each of the 240 IPC and 240 TAU images consisted of 12 rows and 144 columns, with each pixel representing a 280 km square on the globe, collected at three-hour intervals. Individual images were color coded according to land, sea, and clouds before being put into motion. Six animations have been produced which start with the original images, progress to include daily composite images, and culminate with a collage. Animations of the original images have the advantage of relatively short intervals between still frames but have many undefined pixels, which are eliminated in the composites. The results of this project can serve as an example of how to improve the visualization of time-varying image sequences.
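A minimal sketch of the deep-convective-cloud test described above: a pixel is flagged when its cloud-top pressure is at most 440 mb and its optical thickness is at least 23, with undefined pixels excluded. The array names and fill value are assumptions for illustration.

```python
import numpy as np

def convective_mask(ctp_mb, tau, fill_value=-999.0):
    """True where the pixel is a deep convective cloud candidate."""
    valid = (ctp_mb != fill_value) & (tau != fill_value)
    return valid & (ctp_mb <= 440.0) & (tau >= 23.0)

# one ISCCP-style low-resolution frame: 12 rows x 144 columns
ctp = np.full((12, 144), 600.0); ctp[4:7, 30:40] = 350.0
tau = np.full((12, 144), 5.0);   tau[4:7, 30:40] = 30.0
mask = convective_mask(ctp, tau)
```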
A major problem associated with vector quantization is the complexity of an exhaustive codebook search. This problem has hindered the use of this powerful technique for lossy image compression. An exhaustive codebook search requires that an input vector be compared against each code vector in the codebook in order to find the code vector that yields the minimum distortion. Because an exhaustive search does not capitalize on any underlying structure of the code vectors in hyperspace, other researchers have proposed techniques that exploit codebook structure, but these techniques typically result in sub-optimal distortion. We propose a new method that exploits the nearest-neighbor structure of code vectors and significantly reduces the number of computations required in the search process. This technique does not introduce additional distortion, and is thus optimal. Our method requires a one-time precomputation and a small increase in the memory required to store the codebook. In the best case, arising when the code vectors are widely dispersed in the hyperspace, our method requires only a partial search of the codewords. In the worst case, our method requires a full search of the codebook. Since the method depends on the structure of the code vectors in the hyperspace, it is difficult to determine its efficiency in all cases, but tests on typical image compression tasks have shown that this method offers on average an 81.16 percent reduction in the total number of multiplications, additions, and subtractions required as compared to a full search.
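The sketch below shows one well-known way to exploit nearest-neighbor structure among code vectors without losing optimality: inter-codeword distances are precomputed once, and during the search any codeword whose distance to the current best codeword is at least twice the current best distance cannot win (by the triangle inequality) and is skipped without a distortion computation. This illustrates the flavor of such exact accelerated searches; it is not claimed to be the paper's exact method.

```python
import numpy as np

def vq_search(x, codebook, inter_dist):
    """Exact nearest codeword search with triangle-inequality elimination."""
    best, best_d = 0, np.linalg.norm(x - codebook[0])
    for k in range(1, len(codebook)):
        if inter_dist[best, k] >= 2.0 * best_d:
            continue                      # eliminated without computing ||x - c_k||
        d = np.linalg.norm(x - codebook[k])
        if d < best_d:
            best, best_d = k, d
    return best, best_d

rng = np.random.default_rng(3)
codebook = rng.random((256, 16))          # 256 code vectors of dimension 16
# one-time precomputation of inter-codeword distances
inter_dist = np.linalg.norm(codebook[:, None] - codebook[None, :], axis=-1)
idx, dist = vq_search(rng.random(16), codebook, inter_dist)
```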
The geometric and statistical physical concepts of dynamic scale-space paradigms are presented and juxtaposed to those of mathematical morphology. It turns out that the dynamic paradigms can be applied to, substantiate, and even generalize the morphological techniques and paradigms. In particular, the importance of the dynamic scale-space concepts is pointed out in granulometry, by means of size densities or statistical morphological operations, and in morphological scale-space theories, by means of parabolic dilations and watersheds.
In this paper, a new criterion based on the Minimum Description Length (MDL) principle is proposed to guide a region-growing procedure. Since the MDL principle is known to realize the compromise between complexity of modeling and adequacy to the data in a homogeneous way, it is well suited for detecting cartographic objects in aerial images, because their representation has to be simple and realistic. The procedure is dedicated to segmenting disparity maps or Digital Elevation Models into planar regions, and thus detecting 3D planar patches in the scenes. The principle is shown to be able to introduce constraints on the segmentation. By making one term of the description length vary, different levels of representation of the scene are obtained. This makes it possible to obtain concurrent 3D planar hypotheses. The procedure is applied to disparity maps and to digital elevation models computed from gray-level aerial high-resolution stereo pairs in an urban context. On these data, roofs and visible facades are extracted by the procedure.
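A hedged sketch of a two-part MDL test for region growing on a disparity map follows: a region is described by a fitted plane (model cost) plus its residuals (data cost), and two regions are merged only when the merged description is shorter. The Gaussian residual model and three-parameter plane coding are generic textbook approximations, not the paper's exact criterion.

```python
import numpy as np

def plane_description_length(ys, xs, d):
    """Approximate description length, in bits, of a planar fit d ~ a*x + b*y + c."""
    n = d.size
    A = np.column_stack([xs, ys, np.ones(n)])
    coef, *_ = np.linalg.lstsq(A, d, rcond=None)
    rss = np.sum((d - A @ coef) ** 2)
    var = max(rss / n, 1e-12)
    model_bits = 1.5 * np.log2(n)                         # 3 parameters at (1/2) log2(n) bits each
    data_bits = 0.5 * n * np.log2(2 * np.pi * np.e * var) # residuals under a Gaussian model
    return model_bits + data_bits

def merge_is_shorter(r1, r2):
    """r = (ys, xs, d); accept the merge when it reduces the total description length."""
    merged = tuple(np.concatenate(p) for p in zip(r1, r2))
    return plane_description_length(*merged) <= (plane_description_length(*r1)
                                                 + plane_description_length(*r2))
```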
Because of noise, edge detection seldom gives the whole contour of objects in images. We developed a new method to better exploit the information provided by partial edge detection in order to segment multi-thresholdable images. It consists of looking for separating bipoints corresponding to the normals to the most striking boundaries. The thresholds take their values within the intervals defined by these bipoints. The probabilistic model proposed in this paper does not depend on the distribution of pixel values and allows determination of the different families of intervals corresponding to a threshold domain. This method was tested with success on Positron Emission Tomography images and on a set of 4000 fluorescence images. It demonstrates good efficiency despite the low contrast and high blurring of such images.
We used the multiresolution property of the discrete wavelet transform to detect edges in noisy images. In our approach, we used wavelets corresponding to first and second derivatives to generate noisy wavelet coefficients. Then, we compared the wavelet coefficients as a function of scale to reduce the effects of noise. In addition, our approach considered the change in edge position as a function of scale. We analyzed 1D experimental results and compared 2D results on noisy images to a more common edge detection method. Our results lead to improved edge detection in noisy images.
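A hedged 1-D sketch of the multiscale idea follows: first-derivative (of Gaussian) responses are computed at several scales and multiplied pointwise, which reinforces edges that persist across scale while suppressing noise that does not. The specific filters, scales, and threshold are assumptions rather than the paper's exact wavelets.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def multiscale_edges_1d(signal, scales=(1.0, 2.0, 4.0), thresh=None):
    """Scale-product edge response and a binary edge map for a 1-D signal."""
    responses = [gaussian_filter1d(signal.astype(float), s, order=1) for s in scales]
    product = np.prod(responses, axis=0)      # edges persist across scale, noise does not
    if thresh is None:
        thresh = 3.0 * np.median(np.abs(product))
    return np.abs(product) > thresh, product

rng = np.random.default_rng(4)
step = np.concatenate([np.zeros(128), np.ones(128)]) + 0.2 * rng.standard_normal(256)
edges, resp = multiscale_edges_1d(step)
```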
A general model is developed to estimate target detection ranges in the visible spectrum. The model takes into account the background and target luminance, atmospheric conditions, target reflectivity, size and angular velocity, sun and sky radiance, the spectral response and the angular resolution of the observation device as well as the time dedicated for scanning the field of regard. Computer simulations as well as experimental results were used to validate the model.
In this paper, we present robust image filtering algorithms that provide preservation of fine details and strong speckle noise suppression. They were derived using our approach to robust filter design. According to this approach, we used M-estimators and R-estimators derived from the statistical theory of rank tests. At the first stage, to provide impulsive noise rejection, the introduced robust image filters use the central pixel of the filtering window and redescending M-estimators combined with the median or Wilcoxon estimators. At the second stage, to provide multiplicative noise suppression, a modified Sigma filter that implements the iterative calculation scheme of a redescending M-estimator is used. The proposed robust rank detail-preserving filter demonstrated excellent fine detail preservation and impulsive noise removal. Visual and analytical analysis of these results shows that the algorithms proposed on the basis of the RM approach provide good visual quality of the processed data and possess good speckle noise attenuation capabilities.
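A heavily simplified sketch of the two-stage idea is given below: an impulse-rejection stage that keeps the central pixel only when it agrees with the local median (a crude stand-in for the redescending rule), followed by a sigma-style stage that averages only neighbors close to the stage-1 estimate to attenuate multiplicative noise. Window size and thresholds are assumptions; this is not the paper's RM filter.

```python
import numpy as np

def rm_like_filter(img, win=3, impulse_k=3.0, sigma_k=2.0):
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + win, j:j + win].ravel()
            center = float(img[i, j])
            med = np.median(w)
            mad = np.median(np.abs(w - med)) + 1e-12
            # stage 1: reject the central pixel as impulsive if far from the median
            base = med if abs(center - med) > impulse_k * mad else center
            # stage 2: sigma-type averaging over neighbors close to the stage-1 estimate
            close = w[np.abs(w - base) <= sigma_k * mad]
            out[i, j] = close.mean() if close.size else base
    return out
```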
Fusion techniques can be applied to multispectral and higher-spatial-resolution panchromatic images to create a composite image that is easier to interpret than the individual images. Wavelet transform-based multisensor, multiresolution fusion was applied to Landsat Thematic Mapper (TM) multispectral and coregistered higher-resolution SPOT panchromatic images. The objective was to obtain increased-spatial-resolution, false color composite products to support the interpretation of land cover types, wherein the spectral characteristics of the imagery are preserved to provide the spectral clues needed for interpretation. Since the fusion process should not introduce artifacts, a shift-invariant implementation of the discrete wavelet transform (SIDWT) was used. These results were compared with those using the shift-variant discrete wavelet transform (DWT). Overall, the process includes a hue, saturation, and value color space transform to minimize color changes, and a reported point-wise maximum selection rule to combine transform coefficients. The performance of fusion based on the SIDWT and DWT was evaluated with a simulated TM 30-m spatial resolution test image and a higher-resolution reference. Simulated imagery was made by blurring higher-resolution color-IR photography with the TM sensor's point spread function. The SIDWT-based technique produced imagery with fewer artifacts and lower error between fused images and the full-resolution reference. Image examples with TM and SPOT 10-m panchromatic imagery illustrate the reduction in artifacts due to the SIDWT-based fusion.
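A simplified sketch of wavelet fusion with a point-wise maximum selection rule follows, applied to a single band for brevity: the multispectral band (already resampled to the panchromatic grid) and the panchromatic band are decomposed, the larger-magnitude detail coefficients are kept, and the multispectral approximation is retained to preserve spectral character. The paper uses a shift-invariant DWT and an HSV transform of the color composite; this sketch uses an ordinary DWT and a band-wise formulation as stated assumptions.

```python
import numpy as np
import pywt

def fuse_band(ms_band, pan_band, wavelet="db2", level=2):
    """Fuse one multispectral band with a coregistered panchromatic band of the same size."""
    cm = pywt.wavedec2(ms_band.astype(float), wavelet, level=level)
    cp = pywt.wavedec2(pan_band.astype(float), wavelet, level=level)
    fused = [cm[0]]                                   # keep the multispectral approximation
    for dm, dp in zip(cm[1:], cp[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(dm, dp)))  # point-wise max-magnitude rule
    return pywt.waverec2(fused, wavelet)
```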
Autonomous dirigibles, aerial robots consisting of a blimp controlled by a computer based on information gathered by sensors, are a new and promising research field in robotics, offering several original work opportunities. One of them is the study of visual navigation of UAVs. In the work described in this paper, a computer vision and control system was developed to automatically perform very simple navigation tasks for a small indoor blimp. The vision system is able to track artificial visual beacons, objects with known geometrical properties, and from them a geometrical methodology can extract information about the orientation of the blimp. The tracking of natural landmarks is also a possibility for the vision technique developed. The control system uses these data to keep the dirigible on a programmed orientation. Experimental results showing the correct and efficient functioning of the system are presented, and their implications and future possibilities are discussed.
The paper presents a new method for video compression. The discussed techniques consider video frames as a set of correlated images. A common approach to the problem of compression of correlated images is to use some orthogonal transform, for example the cosine or wavelet transform, in order to remove the correlation among images and then to compress the resulting coefficients using an already known compression technique such as JPEG or EZW. However, the optimal representation for removing correlation among images is the Karhunen-Loeve (KL) transform. In this paper we apply the recently proposed Optimal Image Coding using KL transform (OICKL) method, which is based on this approach. In order to take into account the nature of video, we use Triangle Motion Compensation to improve the correlation among frames. The experimental part compares the performance of the plain OICKL codec with OICKL and motion compensation combined. Recommendations concerning the use of motion compensation with the OICKL technique are worked out.
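The sketch below illustrates the temporal Karhunen-Loeve step in isolation: a group of frames is decorrelated by the eigenvectors of its temporal covariance, so most of the energy collapses into a few eigen-frames that could then be coded with a still-image coder. Frame grouping, quantization, and motion compensation are outside this sketch, and the function names are assumptions, not the OICKL codec.

```python
import numpy as np

def kl_transform_frames(frames):
    """frames: (T, H, W). Returns eigen-frames (T, H, W), temporal basis (T, T), and mean."""
    T = frames.shape[0]
    X = frames.reshape(T, -1).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    C = Xc @ Xc.T / Xc.shape[1]            # T x T temporal covariance
    evals, evecs = np.linalg.eigh(C)       # ascending eigenvalue order
    basis = evecs[:, ::-1]                 # strongest temporal components first
    eigen_frames = (basis.T @ Xc).reshape(T, *frames.shape[1:])
    return eigen_frames, basis, mean

def kl_inverse(eigen_frames, basis, mean, shape_hw):
    """Reconstruct the original frame group from the (possibly quantized) eigen-frames."""
    T = eigen_frames.shape[0]
    Xc = basis @ eigen_frames.reshape(T, -1)
    return (Xc + mean).reshape(T, *shape_hw)
```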
The autonomous navigation of a mobile vehicle can be described as the task it undertakes to move itself through a series of positions in the environment, based on information gathered by its sensors. In order to accomplish this task, the vehicle has to cope with two main subtasks, namely obstacle avoidance and self-localization. The latter implies the ability to determine its position and orientation with respect to the environment. This work describes a simple but efficient method that performs pose estimation for a mobile vehicle based on visual information from artificial landmarks, using a sequence of frames from an uncalibrated camera. The landmark is segmented from the image sequences, and the vehicle's localization is computed using the landmark's geometric properties and the vehicle's motion vector. This methodology can be easily extended for use by different types of mobile agents. One of its key advantages is that it is computationally efficient, making it suitable for real-time navigation. Experiments conducted with a Nomad 200 mobile robot equipped with a color camera system have shown the method to be repeatable and very robust to noise. Visual measurements were compared with readings from other on-board sensors, such as ultrasound, with excellent consistency.
Metropolis Monte Carlo deconvolution is introduced. The actual input data are reconstructed by means of grains according to a probability distribution function defined by the blurred data. As the blurred data are being reconstructed, a grain is placed in the actual input domain at every reconstruction step or after a finite number of reconstruction steps. To test the method, a wide Gaussian impulse response function is designed and convolved with an input data set containing 24 points. As the grain size (GS) is reduced, the number of Monte Carlo moves, and with it the accuracy of the method, is increased. Grain sizes ranging from 0.0001 to 1.0 are used. For each GS, five different random number seeds are used for accuracy. The mean-square error (MSE) is calculated, and the average MSE is plotted versus the GS. Sample reconstructed functions are also given for each GS.
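A hedged 1-D sketch of a grain-based Metropolis deconvolution follows: grains of fixed size are proposed at random positions in the input domain, and a proposal is accepted with the Metropolis rule applied to the squared misfit between the re-blurred estimate and the observed (blurred) data. The grain size, temperature, move count, and test signal are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def metropolis_deconvolve(blurred, irf, grain=0.05, moves=20000, temperature=1e-3, seed=0):
    """irf: matrix whose column p is the impulse response of a unit grain at position p."""
    rng = np.random.default_rng(seed)
    n = blurred.size
    estimate = np.zeros(n)                 # reconstructed input
    model = np.zeros(n)                    # estimate convolved with the IRF
    misfit = np.sum((blurred - model) ** 2)
    for _ in range(moves):
        pos = rng.integers(n)
        trial_model = model + grain * irf[:, pos]        # effect of one more grain at pos
        trial_misfit = np.sum((blurred - trial_model) ** 2)
        if trial_misfit < misfit or rng.random() < np.exp((misfit - trial_misfit) / temperature):
            estimate[pos] += grain
            model, misfit = trial_model, trial_misfit
    return estimate

# wide Gaussian impulse response as a matrix whose columns are shifted kernels
n = 96
x = np.arange(n)
irf = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 6.0) ** 2)
irf /= irf.sum(axis=0, keepdims=True)
f_true = np.zeros(n); f_true[[30, 60]] = 1.0
estimate = metropolis_deconvolve(irf @ f_true, irf)
```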