K. Kavoura, M. Kordouli, K. Nikolakopoulos, P. Elias, O. Sykioti, V. Tsagaris, G. Drakatos, Th. Rondoyanni, G. Tsiambaos, N. Sabatakakis, V. Anastasopoulos
Landslide phenomena constitute a major geological hazard in Greece, and especially in the western part of the country, as a result of anthropogenic activities, growing urbanization and uncontrolled land use. More frequent triggering events and an increased susceptibility of the ground surface to instabilities as a consequence of climate change impacts (continued deforestation, mainly due to devastating forest wildfires, and extreme meteorological events) have also increased the landslide risk. The landslide occurrence studied here, named "Platanos", was selected within the framework of the "Landslide Vulnerability Model (LAVMO)" project, which aims at creating a continuously updated electronic platform for assessing landslide-related risks. It is a coastal area situated between Korinthos and Patras, at the northwestern part of the elongated graben of the Corinth Gulf. The paper presents the combined use of geological-geotechnical in situ data, remote sensing data and GIS techniques for building a subsurface geological model. A high-accuracy Digital Surface Model (DSM), an airphoto mosaic and satellite data with a spatial resolution of 0.5 m were used to compile an orthophoto base map of the study area. Geological-geotechnical data obtained from exploratory boreholes were digitized and combined in a GIS platform with engineering geological maps to derive a three-dimensional subsurface model. This model is intended to be combined with inclinometer measurements in order to locate the sliding surface within the instability zone.
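As a rough illustration of the GIS step described above, the sketch below grids hypothetical borehole contact depths over a regular raster and subtracts them from a surface elevation; the coordinates, depths, grid extent and DSM elevation are placeholders, not actual LAVMO project data.

```python
# Hedged sketch: interpolating hypothetical borehole layer depths onto a
# regular grid to approximate a subsurface contact, as one ingredient of a
# 3-D subsurface model. All coordinates, depths and elevations are
# placeholders, not LAVMO project data.
import numpy as np
from scipy.interpolate import griddata

# Hypothetical borehole logs: (x, y, depth to a weak horizon) in metres
boreholes = np.array([
    [3050.0, 4120.0, 12.5],
    [3110.0, 4185.0, 10.2],
    [3010.0, 4230.0, 14.8],
    [3155.0, 4090.0,  9.7],
])

# Regular grid standing in for a 0.5 m DSM tile of the study area
xi = np.arange(3000.0, 3200.0, 0.5)
yi = np.arange(4050.0, 4250.0, 0.5)
xx, yy = np.meshgrid(xi, yi)

# Interpolate the depth of the geological contact between the boreholes
contact_depth = griddata(boreholes[:, :2], boreholes[:, 2], (xx, yy),
                         method="linear")

# Contact elevation = surface elevation minus interpolated depth; a flat
# 85 m surface stands in for the real DSM raster here.
contact_elevation = np.full_like(xx, 85.0) - contact_depth
print(np.nanmin(contact_elevation), np.nanmax(contact_elevation))
```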
Satellite hyperspectral imagery, and especially missions like CHRIS Proba, provides new capabilities for environmental and geological studies, since it offers high spectral and spatial resolution. This work exploits the potential of CHRIS Proba data for classification purposes in areas of high geological interest. To this end, different classification methods are employed, with the matched filtering (PCT-BSS) approach appearing to be the most promising. The approach is tested on the Araxos peninsula in Greece, an area of high environmental and geological interest.
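The abstract names matched filtering as the most promising classifier; as a hedged illustration, the sketch below computes a standard matched-filter score for a single target spectrum on a synthetic hyperspectral cube. The cube, band count and target spectrum are assumptions, and the PCT-BSS preprocessing of the paper is not reproduced.

```python
# Hedged sketch: a standard matched-filter score for a single target spectrum
# applied to a synthetic hyperspectral cube. The cube, band count and target
# are placeholders; the PCT-BSS preprocessing of the paper is not reproduced.
import numpy as np

rng = np.random.default_rng(0)
rows, cols, bands = 64, 64, 18                    # CHRIS Proba-like band count
cube = rng.normal(loc=1.0, scale=0.2, size=(rows, cols, bands))
target = rng.normal(loc=1.0, scale=0.2, size=bands)   # hypothetical endmember

pixels = cube.reshape(-1, bands)
mean = pixels.mean(axis=0)
cov = np.cov(pixels, rowvar=False)
cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(bands))   # regularized inverse

d = target - mean
scores = (pixels - mean) @ cov_inv @ d / (d @ cov_inv @ d)
score_map = scores.reshape(rows, cols)            # higher score = closer match
print(score_map.min(), score_map.max())
```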
KEYWORDS: Image fusion, Image processing, Information theory, Night vision, Optical engineering, Data fusion, Medical imaging, RGB color model, Principal component analysis, Visualization
A measure for objectively assessing the performance of color image fusion methods is proposed. Two different aspects are considered in establishing the proposed measure, namely the amount of common information between the source images and the final fused image, and the distribution of color information in the final image, which determines whether an optimal color representation is achieved. Mutual information and conditional mutual information are employed in order to assess the information transfer between the source images and the final fused image. Simultaneously, the distribution of colors in the final image is explored by means of the hue coordinate in the perceptually uniform CIELAB space. The proposed measure does not depend on the use of a target fused image for the objective performance evaluation. It is employed experimentally for the objective evaluation of fusion methods on medical imaging and night vision data.
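As a simplified illustration of the two ingredients of the measure, the sketch below estimates the mutual information between a source image and the fused luminance, and builds the hue-angle distribution of the fused image in CIELAB; the random test images and the quantization choices are placeholders, not the paper's exact formulation.

```python
# Hedged sketch: the two ingredients of the measure, estimated with
# off-the-shelf tools: mutual information between a source image and the
# fused luminance, and the hue-angle distribution of the fused image in
# CIELAB. The random images and quantization levels are placeholders.
import numpy as np
from sklearn.metrics import mutual_info_score
from skimage.color import rgb2lab

rng = np.random.default_rng(1)
source = rng.integers(0, 256, size=(128, 128))       # grayscale source image
fused_rgb = rng.random(size=(128, 128, 3))           # fused color image in [0, 1]

# Mutual information between the source and the fused luminance (L*),
# estimated from quantized intensity labels.
lab = rgb2lab(fused_rgb)
fused_l = np.digitize(lab[..., 0], bins=np.linspace(0, 100, 64))
mi = mutual_info_score((source // 4).ravel(), fused_l.ravel())

# Hue angle h = atan2(b*, a*) in CIELAB; its spread reflects how the color
# information is distributed in the fused image.
hue = np.arctan2(lab[..., 2], lab[..., 1])
hue_hist, _ = np.histogram(hue, bins=36, range=(-np.pi, np.pi), density=True)
print(mi, hue_hist.max())
```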
The use of satellite imagery, and specifically SAR data, for vessel detection and identification has attracted researchers during the last decade. The objective of this work is to provide a novel approach for ship identification based mainly on polarimetric data, taking into consideration the different behaviour of a ship in the various polarizations. For this purpose, new measures, and accordingly a new feature vector, are proposed. The feature vector is employed in order to create a database of vessel signatures, and its efficiency is tested on ASAR data.
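Since the abstract does not spell out the proposed measures, the sketch below only illustrates, in a generic way, how per-polarization statistics of a ship chip might be assembled into a feature vector; every statistic and the HH/VV ratio shown here are hypothetical stand-ins rather than the paper's measures.

```python
# Hedged sketch: assembling per-polarization statistics of a ship chip into a
# feature vector. The statistics and the HH/VV ratio below are hypothetical
# stand-ins, not the measures proposed in the paper.
import numpy as np

def vessel_features(chips):
    """chips: dict mapping a polarization key (e.g. 'HH', 'VV') to a 2-D intensity chip."""
    feats = []
    for pol in sorted(chips):
        chip = chips[pol].astype(float)
        feats.extend([chip.mean(), chip.std(), chip.max()])
    # Cross-polarization intensity ratio as an extra illustrative descriptor
    if "HH" in chips and "VV" in chips:
        feats.append(chips["HH"].mean() / (chips["VV"].mean() + 1e-12))
    return np.asarray(feats)

rng = np.random.default_rng(2)
chips = {"HH": rng.gamma(2.0, size=(32, 32)), "VV": rng.gamma(1.5, size=(32, 32))}
print(vessel_features(chips))
```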
The objective evaluation of the performance of pixel-level fusion methods is addressed. For this purpose, a global measure based on information theory is proposed. The measure employs mutual and conditional mutual information to assess and represent the amount of information transferred from the source images to the final fused gray-scale image. Accordingly, the common information contained in the source images is considered only once in the performance evaluation procedure. The experimental results demonstrate the applicability of the proposed measure in comparing different fusion methods or in optimizing the parameters of a specific algorithm.
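A minimal discrete stand-in for such a measure is sketched below: it estimates I(A;F) + I(B;F|A) from joint histograms, so that the information shared by the two sources is counted only once. The toy images and the 16-level quantization are assumptions for illustration.

```python
# Hedged sketch: a discrete estimate of I(A;F) + I(B;F|A) from joint
# histograms, so that the information shared by the two sources is counted
# only once. The toy images and 16-level quantization are assumptions.
import numpy as np

def entropy_from_counts(counts):
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(6)
a = rng.integers(0, 16, size=(128, 128))     # quantized source image A
b = rng.integers(0, 16, size=(128, 128))     # quantized source image B
f = (a + b) // 2                             # toy "fused" image

ha = entropy_from_counts(np.histogram(a, bins=16)[0])
hf = entropy_from_counts(np.histogram(f, bins=16)[0])
haf = entropy_from_counts(np.histogram2d(a.ravel(), f.ravel(), bins=16)[0].ravel())
hab = entropy_from_counts(np.histogram2d(a.ravel(), b.ravel(), bins=16)[0].ravel())
habf = entropy_from_counts(np.histogramdd(
    np.stack([a.ravel(), b.ravel(), f.ravel()], axis=1), bins=16)[0].ravel())

mi_af = ha + hf - haf                        # I(A;F)
cmi_bf_given_a = haf + hab - ha - habf       # I(B;F|A)
print(mi_af + cmi_bf_given_a)                # total transferred information
```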
A comparison of different classification approaches for multitemporal SAR image data sets is provided in this work. The aim is to assess the performance of estimators of the backscatter temporal variability in terms of classification accuracy for a typical four-class problem. Different approaches to forming an appropriate feature vector are discussed and compared with multichannel classifiers such as fuzzy k-means. Finally, a classifier that employs a feature fusion step based on principal components analysis proves promising, since it provides increased classification accuracy and reduced computational complexity.
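As a hedged sketch of the feature-fusion idea, the code below applies PCA to a synthetic multitemporal backscatter stack and classifies the fused features with plain k-means (standing in for the fuzzy k-means mentioned above); the data, number of components and number of classes are illustrative.

```python
# Hedged sketch: PCA-based feature fusion of a synthetic multitemporal SAR
# stack followed by an unsupervised classifier; plain k-means stands in for
# the fuzzy k-means mentioned above, and all data and parameters are
# illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
dates, rows, cols = 12, 100, 100
stack = rng.gamma(shape=2.0, size=(dates, rows, cols))   # synthetic backscatter

pixels = stack.reshape(dates, -1).T                      # (n_pixels, n_dates)
fused = PCA(n_components=3).fit_transform(pixels)        # feature fusion step
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(fused)
class_map = labels.reshape(rows, cols)                   # four-class map
print(np.bincount(labels))
```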
The Kullback-Leibler (KL) divergence, which is a fundamental concept in information theory used to quantify probability density differences, is employed in assessing the color content of digital images. For this purpose, digital images are encoded in the CIELAB color space and modeled as discrete random fields, which are assumed to be described sufficiently by 3-D probability density functions. Subsequently, using the KL divergence, a global quality assessment of an image is presented as the information content of the CIELAB encoding of the image relative to channel capacity. This is expressed by an image with "maximum realizable color information" (MRCI), which we define. Additionally, 1-D estimates of the marginal distributions in luminance, chroma, and hue are explored, and the proposed quality assessment is examined relative to KL divergences based on these distributions. The proposed measure is tested using various color images, pseudocolor representations and different renderings of the same scene. Test images and a MATLAB implementation of the measure are available online at http://www.ellab.physics.upatras.gr/PersonalPages/VTsagaris/research.htm.
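A much reduced, 1-D version of this idea can be sketched as the KL divergence of an image's CIELAB hue distribution from a uniform reference; the uniform reference below is an assumption for illustration and does not reproduce the 3-D densities or the MRCI reference image of the paper.

```python
# Hedged sketch: a 1-D marginal version of the idea, measuring the KL
# divergence of an image's CIELAB hue distribution from a uniform reference.
# The uniform reference is an assumption; the paper's 3-D densities and the
# MRCI reference image are not reproduced here.
import numpy as np
from skimage.color import rgb2lab
from scipy.stats import entropy

rng = np.random.default_rng(4)
img = rng.random(size=(128, 128, 3))              # placeholder color image

lab = rgb2lab(img)
hue = np.arctan2(lab[..., 2], lab[..., 1])        # hue angle from a*, b*
p, _ = np.histogram(hue, bins=36, range=(-np.pi, np.pi))
p = p / p.sum()                                   # empirical hue distribution
q = np.full_like(p, 1.0 / p.size)                 # uniform reference density

print(entropy(p, q))                              # D_KL(p || q), in nats
```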
A novel procedure that aims at increasing the spatial resolution of multispectral data while simultaneously creating a high-quality RGB fused representation is proposed in this paper. For this purpose, neural networks are employed and a successive training procedure is applied in order to incorporate into the network structure knowledge about recovering lost frequencies, thus yielding fine-resolution output color images. MERIS multispectral data are employed to demonstrate the performance of the proposed method.
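As a heavily simplified stand-in for the successively trained networks described above, the sketch below fits a small regression network that maps multispectral pixel vectors to RGB values; the synthetic data, the choice of target bands and the network size are assumptions.

```python
# Hedged sketch: a small regression network mapping multispectral pixel
# vectors to RGB values, a much reduced stand-in for the successively trained
# networks described above. The synthetic data, target-band choice and
# network size are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
n_pixels, n_bands = 5000, 15                      # MERIS-like band count
X = rng.random(size=(n_pixels, n_bands))          # multispectral samples
y = X[:, [7, 4, 2]]                               # placeholder RGB targets

net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
net.fit(X, y)
rgb = np.clip(net.predict(X[:10]), 0.0, 1.0)      # fused RGB for 10 pixels
print(rgb.shape)
```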
An objective measure for evaluating the performance of pixel-level fusion methods is introduced in this work. The proposed measure employs mutual information and conditional mutual information in order to assess and represent the amount of information transferred from the source images to the final fused greyscale image. Accordingly, the common information contained in the source images is considered only once in the formation of the final image. The measure can be used regardless of the number of source images or of any assumptions about the intensity values, and there is no need for an ideal or test image. The experimental results demonstrate the usefulness of the proposed measure.
In this work, a pixel-level fusion technique for enhancing the visual interpretation of multispectral images is proposed. The technique takes into consideration the inherently high correlation of the RGB bands of natural color images, which is closely related to the color perception attributes of the human eye. The method provides dimensionality reduction in the multispectral vector space, while the resulting RGB color image tends to be perceptually optimal. The proposed method is compared with two other existing techniques.
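One hedged way to realize the idea of a correlation-aware RGB mapping, not necessarily the paper's method, is sketched below: the multispectral data are reduced to three decorrelated components, and a natural-color-like inter-band correlation is then imposed through a Cholesky mixing step; the target correlation matrix is an assumed placeholder.

```python
# Hedged sketch: reduce synthetic multispectral data to three components and
# impose a natural-color-like inter-band correlation via a Cholesky mixing
# step. The target correlation matrix is an assumed placeholder, not the one
# derived in the paper.
import numpy as np

rng = np.random.default_rng(7)
rows, cols, bands = 80, 80, 10
pixels = rng.random(size=(rows * cols, bands))

# Step 1: decorrelate and reduce to three principal components (PCA via SVD)
centered = pixels - pixels.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = centered @ vt[:3].T
whitened = components / components.std(axis=0)

# Step 2: re-introduce an RGB-like correlation structure
target_corr = np.array([[1.0, 0.9, 0.8],
                        [0.9, 1.0, 0.9],
                        [0.8, 0.9, 1.0]])        # assumed natural-RGB correlation
rgb = whitened @ np.linalg.cholesky(target_corr).T

# Rescale each channel to [0, 1] for display; correlations are preserved
rgb = (rgb - rgb.min(axis=0)) / (rgb.max(axis=0) - rgb.min(axis=0))
print(np.corrcoef(rgb, rowvar=False).round(2))
```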