A new method for atmospheric correction of high-resolution image patches over heterogeneous terrain is presented. This efficient method performs atmospheric correction under high-resolution surface pressure variations over a patch in which the gaseous and aerosol constituents can be assumed constant. This is of interest for the validation of surface reflectance for pixels surrounding AERONET sites in heterogeneous terrain. The efficiency of the method stems from the smooth variation with surface pressure of the functions used in the atmospheric correction, which is exploited to decouple the high-resolution variation of elevation/pressure from the rest of the atmospheric correction process. As a result, only a few radiative transfer code evaluations are needed, independent of the number of high-resolution pixels in the patch. The method allows pressure correction at every point of a high-resolution scene, reducing the errors of current methods in heterogeneous terrain by up to two orders of magnitude. The technique can be applied to the calibration and validation of surface reflectance, providing a much greater volume of data for performance evaluation.
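The abstract implies that the atmospheric correction functions vary smoothly with surface pressure, so per-pixel radiative transfer runs can be replaced by interpolation between a few coarse pressure nodes. The following is a minimal Python sketch of that idea; `run_rt_code` is a hypothetical placeholder for a real radiative transfer code (e.g. 6S or MODTRAN), and the final Lambertian inversion is the textbook formula, not necessarily the paper's exact formulation.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def run_rt_code(pressure_hpa):
    """Hypothetical stand-in for one expensive radiative transfer run.

    Returns (path reflectance, total transmittance, spherical albedo)
    as smooth placeholder functions of surface pressure [hPa].
    """
    rho_path = 0.05 * (pressure_hpa / 1013.25)
    trans = np.exp(-0.2 * pressure_hpa / 1013.25)
    s_alb = 0.1 * (pressure_hpa / 1013.25)
    return rho_path, trans, s_alb

# Few RT evaluations at coarse pressure nodes (independent of pixel count).
nodes = np.linspace(700.0, 1050.0, 5)
samples = np.array([run_rt_code(p) for p in nodes])            # shape (5, 3)
splines = [CubicSpline(nodes, samples[:, k]) for k in range(3)]

# High-resolution per-pixel surface pressure, e.g. derived from a DEM.
pressure_map = np.random.uniform(750.0, 1020.0, size=(1024, 1024))

# Per-pixel atmospheric functions via interpolation instead of per-pixel RT.
rho_path, trans, s_alb = (spl(pressure_map) for spl in splines)

# Standard Lambertian inversion from TOA reflectance to surface reflectance.
rho_toa = np.random.uniform(0.05, 0.4, size=(1024, 1024))      # dummy input
y = (rho_toa - rho_path) / trans
rho_surface = y / (1.0 + s_alb * y)
```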
Today, video surveillance systems produce thousands of terabytes of data. This source of information can be very valuable, as it contains spatio-temporal information about abnormal, similar, or periodic activities. However, searching unstructured large-scale video footage for specific situations or activities can be exhausting or even pointless; the apparent similarity of situations makes such searches extremely difficult, especially for human observers. To keep this amount of data manageable and hence usable, this paper aims at clustering situations with respect to their visual content as well as their motion patterns. Besides standard image content descriptors such as HOG, we present and investigate novel descriptors, called Franklets, which explicitly encode motion patterns for certain image regions. Slow feature analysis (SFA) is performed for dimension reduction based on the temporal variance of the features. Reducing the dimension with SFA yields higher feature discrimination than standard PCA dimension reduction, and the effects of SFA-based dimension reduction are investigated in this paper. Clustering results on real data from the Hamburg Harbour Anniversary 2014 are presented with both HOG feature descriptors and Franklets, and we show that SFA achieves an improvement over standard PCA techniques. Finally, an application of visual clustering with self-organizing maps is introduced.
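Franklets are specific to this paper, but the SFA dimension-reduction step it contrasts with PCA is generic. Below is a minimal sketch of linear SFA in Python, assuming feature vectors ordered in time; the whitening threshold and the toy signal are illustrative choices, not the paper's setup.

```python
import numpy as np

def linear_sfa(X, n_components):
    """Minimal linear slow feature analysis.

    X: array of shape (T, D), rows ordered in time.
    Returns the n_components slowest output signals, shape (T, n_components).
    """
    X = X - X.mean(axis=0)
    # Whiten via PCA: decorrelate and scale to unit variance.
    cov = np.cov(X, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    keep = eigval > 1e-10                     # guard against rank deficiency
    W = eigvec[:, keep] / np.sqrt(eigval[keep])
    Z = X @ W
    # Slowness: minimize the variance of the temporal derivative.
    dZ = np.diff(Z, axis=0)
    deigval, deigvec = np.linalg.eigh(np.cov(dZ, rowvar=False))
    # eigh sorts ascending, so the first eigenvectors are the slowest.
    return Z @ deigvec[:, :n_components]

# Example: recover the slow component of a noisy oscillating signal.
t = np.linspace(0, 10, 500)
X = np.column_stack([np.sin(0.5 * t), np.cos(0.5 * t)])
X += 0.05 * np.random.randn(500, 2)
Y = linear_sfa(X, n_components=1)
```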
With the transition towards renewable energies, electricity suppliers are faced with huge challenges. In particular, the increasing integration of solar power systems into the grid is becoming more and more complicated because of their dynamic feed-in capacity. To help stabilize the grid, the feed-in capacity of a solar power system within the next hours, minutes, and even seconds should be known in advance. In this work, we present a consumer camera-based system for forecasting the feed-in capacity of a solar system over a horizon of 10 seconds. A camera is pointed at the sky, and clouds are detected, segmented, and tracked. A quantitative prediction of the insolation is performed based on the tracked clouds. Image data as well as ground-truth data for the feed-in capacity were synchronously collected at 1 Hz using a small solar panel, a resistor, and a measuring device. Preliminary results demonstrate both the applicability and the limits of the proposed system.
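As a rough illustration of the segment-track-predict pipeline, here is a hedged Python/OpenCV sketch: cloud pixels are segmented with a simple red/blue-ratio heuristic, dense optical flow provides cloud motion, and the motion is extrapolated over the 10 s horizon to decide whether the sun pixel will be occluded. The ratio threshold, the use of Farnebäck flow, and the upwind lookup are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np
import cv2

def forecast_occlusion(prev_bgr, curr_bgr, sun_xy, horizon_s=10.0, dt_s=1.0):
    """Predict whether the sun pixel is cloud-covered in `horizon_s` seconds.

    prev_bgr, curr_bgr: consecutive sky images (BGR, captured dt_s apart).
    sun_xy: (x, y) pixel position of the sun in the image.
    """
    # Cloud segmentation: clouds are roughly white, clear sky is blue,
    # so a red/blue ratio threshold separates them (a common heuristic).
    b = curr_bgr[:, :, 0].astype(np.float32) + 1e-6
    r = curr_bgr[:, :, 2].astype(np.float32)
    cloud_mask = (r / b) > 0.8

    # Dense optical flow between frames gives per-pixel cloud motion [px/frame].
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Mean cloud velocity, extrapolated linearly over the forecast horizon.
    v = flow[cloud_mask].mean(axis=0) if cloud_mask.any() else np.zeros(2)
    shift = v * (horizon_s / dt_s)

    # Look "upwind": the cloud that will cover the sun is currently at the
    # sun position minus the predicted displacement.
    x = int(round(sun_xy[0] - shift[0]))
    y = int(round(sun_xy[1] - shift[1]))
    h, w = cloud_mask.shape
    return bool(cloud_mask[y, x]) if (0 <= x < w and 0 <= y < h) else False
```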
Due to decreasing sensor prices and increasing processing performance, the use of multiple cameras in vehicles is becoming an attractive option for environment perception. This contribution focuses on non-overlapping multi-camera configurations on a mobile platform and their purely vision-based self-calibration, as well as its restrictions. Using corresponding features between the cameras is very difficult and likely to fail because of the different appearances in different views and motion-dependent time delays. Instead, the hand-eye calibration (HEC) technique based on visual odometry is considered, which solves the problem by exploiting the cameras' motions. For that purpose, this contribution presents an approach to continuously calibrate the cameras using so-called motion adjustment (MA) and an iterated extended Kalman filter (IEKF). Visual odometry in driving vehicles often struggles to estimate the relative magnitudes of the translational motion, which is crucial for the HEC. MA therefore simultaneously estimates the extrinsic parameters up to scale as well as the relative motion magnitudes. Furthermore, the estimation process is embedded into a global fusion framework that exploits the redundant information provided by multiple cameras in order to yield more robust results. This paper presents results with simulated and real data.
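The rotational core of HEC can be written as the classical equation R_A R_X = R_X R_B, relating paired relative motions A and B of the two sensors to the unknown extrinsic rotation R_X. Below is a hedged Python sketch of the standard quaternion-based least-squares solution for this rotation; it is only the generic batch step, not the paper's motion adjustment or IEKF formulation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def quat_wxyz(R):
    """Rotation matrix -> quaternion (w, x, y, z)."""
    return np.roll(Rotation.from_matrix(R).as_quat(), 1)  # scipy gives (x,y,z,w)

def quat_L(q):
    """Left multiplication matrix: quat_L(a) @ b == a*b."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w, -z,  y],
                     [y,  z,  w, -x],
                     [z, -y,  x,  w]])

def quat_R(q):
    """Right multiplication matrix: quat_R(b) @ a == a*b."""
    w, x, y, z = q
    return np.array([[w, -x, -y, -z],
                     [x,  w,  z, -y],
                     [y, -z,  w,  x],
                     [z,  y, -x,  w]])

def hec_rotation(R_as, R_bs):
    """Solve R_A R_X = R_X R_B for R_X from paired relative rotations."""
    # q_a * q_x = q_x * q_b  =>  (L(q_a) - R(q_b)) q_x = 0 for every pair.
    M = np.vstack([quat_L(quat_wxyz(Ra)) - quat_R(quat_wxyz(Rb))
                   for Ra, Rb in zip(R_as, R_bs)])
    _, _, Vt = np.linalg.svd(M)         # least-squares null vector of M
    q_x = Vt[-1]                        # (w, x, y, z), unit norm
    return Rotation.from_quat(np.roll(q_x, -1)).as_matrix()
```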
The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To exploit the information in these images on site, image retrieval systems that search for similar objects in a user's own image database are becoming more and more popular. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database, or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, scalability to large image databases is addressed with the popular bag-of-words model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface that presents the most similar images in the database and highlights the visual information common with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.
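A compact sketch of such a server-side bag-of-words pipeline in Python, using ORB local features, a flat k-means vocabulary, and TF-IDF weighting with cosine ranking. These are common textbook choices standing in for the unspecified local features and "state-of-the-art extensions" of the actual system.

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans

def build_index(images, n_words=200):
    """Train a visual vocabulary and index images as TF-IDF BoW histograms."""
    orb = cv2.ORB_create()
    per_image = []
    for img in images:
        _, desc = orb.detectAndCompute(img, None)
        per_image.append(np.zeros((0, 32), np.uint8) if desc is None else desc)
    vocab = KMeans(n_clusters=n_words, n_init=4).fit(
        np.vstack(per_image).astype(np.float32))
    # Term-frequency histogram of visual words per image.
    tf = np.zeros((len(images), n_words))
    for i, desc in enumerate(per_image):
        if len(desc):
            words = vocab.predict(desc.astype(np.float32))
            tf[i] = np.bincount(words, minlength=n_words)
    # Inverse document frequency down-weights ubiquitous visual words.
    idf = np.log((1 + len(images)) / (1 + (tf > 0).sum(axis=0)))
    bow = tf * idf
    bow /= np.linalg.norm(bow, axis=1, keepdims=True) + 1e-12
    return vocab, idf, bow

def query(img, vocab, idf, bow):
    """Return database indices ranked by cosine similarity to the query."""
    _, desc = cv2.ORB_create().detectAndCompute(img, None)
    h = np.bincount(vocab.predict(desc.astype(np.float32)),
                    minlength=len(idf)) * idf
    h /= np.linalg.norm(h) + 1e-12
    return np.argsort(bow @ h)[::-1]
```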
In this contribution we propose methods for vehicle detection and tracking for Advanced Driver Assistance Systems (ADAS) that work under extremely adverse weather conditions. Most state-of-the-art vehicle detection and tracking methods are based either on appearance-based vehicle recognition or on the extraction and tracking of dedicated image key points. Visibility deterioration due to raindrops and water streaks on the windshield, swirling spray, and fog leads to a drastic performance reduction or even to a complete failure of these approaches. In this contribution we propose several methods for coping with these phenomena. In addition to an extension of the feature-based tracking method that copes with outliers and temporarily disappearing key points, we present a detection and tracking method based on searching for vehicle rear lights and whole rear views in the saturation channel. The use of symmetry operators and search space restriction makes it possible to detect and track vehicles even in pouring rain. Furthermore, we present two applications of the above-described methods. Estimating the strength of the spray produced by preceding vehicles allows conclusions to be drawn about the overall visibility conditions and the intensity of one's own rear lights to be adjusted. Moreover, a restoration of deteriorated image regions becomes possible.
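A hedged sketch of the saturation-channel idea in Python/OpenCV: threshold the HSV saturation channel, extract blobs, and pair blobs of similar height and area as rear-light candidates. The thresholds are illustrative, and this simple pairing test stands in for the symmetry operators and search space restriction described above.

```python
import numpy as np
import cv2

def detect_rear_lights(bgr, sat_thresh=150, y_tol=10, size_tol=0.5):
    """Find candidate rear-light pairs as symmetric blobs in the saturation channel.

    Returns a list of ((x1, y1), (x2, y2)) blob-centroid pairs.
    """
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    sat = hsv[:, :, 1]
    # Lit rear lights stay strongly saturated even when rain and spray
    # wash out intensity and gradient cues.
    _, mask = cv2.threshold(sat, sat_thresh, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    blobs = [(centroids[i], stats[i, cv2.CC_STAT_AREA])
             for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > 20]
    pairs = []
    # Simple symmetry test: blobs at similar height with similar area
    # are kept as a candidate light pair.
    for i in range(len(blobs)):
        for j in range(i + 1, len(blobs)):
            (c1, a1), (c2, a2) = blobs[i], blobs[j]
            if abs(c1[1] - c2[1]) < y_tol and \
               min(a1, a2) / max(a1, a2) > size_tol:
                pairs.append((tuple(c1), tuple(c2)))
    return pairs
```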
This paper addresses the issue of generating a panoramic view and panoramic depth maps using only a single camera. The proposed approach first estimates the egomotion of the camera. Based on this information, a particle filter approximates the 3D structure of the scene, so that 3D scene points are modeled probabilistically. These points are accumulated in a cylindrical coordinate system. The probabilistic representation of 3D points is used to handle the problem of visualizing occluding and occluded scene points in a noisy environment and to obtain a stable data visualization. This approach can easily be extended to calibrated multi-camera applications (even with non-overlapping fields of view).
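A simplified sketch of accumulating probabilistically modeled 3D points in a cylindrical coordinate system: each point is binned by azimuth and height, and per cell only the most confident depth is kept, which is one plausible way to suppress occluded points for a stable visualization. The paper's full particle-filter representation is reduced here to a single confidence weight per point.

```python
import numpy as np

def accumulate_cylindrical(points, weights, n_theta=720, n_h=240,
                           h_range=(-2.0, 2.0)):
    """Accumulate weighted 3D points into a cylindrical panoramic depth map.

    points:  (N, 3) scene points (x, y, z) relative to the cylinder axis.
    weights: (N,) confidence per point (e.g. inverse particle variance).
    Returns an (n_h, n_theta) depth map (NaN where no data was observed).
    """
    x, y, z = points.T
    theta = np.arctan2(y, x)                    # azimuth on the cylinder
    radius = np.hypot(x, y)                     # depth from the axis
    ti = ((theta + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
    hi = np.clip(((z - h_range[0]) / (h_range[1] - h_range[0]) * n_h)
                 .astype(int), 0, n_h - 1)

    depth = np.full((n_h, n_theta), np.nan)
    conf = np.zeros((n_h, n_theta))
    for t, h, r, w in zip(ti, hi, radius, weights):
        # Keep the most confident point per cell; less certain (often
        # occluded) points are suppressed, stabilizing the visualization.
        if w > conf[h, t]:
            conf[h, t], depth[h, t] = w, r
    return depth
```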