In this article, we tackle the lack of robustness and reliability of surveillance systems caused by security-irrelevant disturbances such as shaking trees or flying birds. A novel scene analysis approach based on hypergraph-based trajectories is introduced to reduce the rate of false positives. The concept of hypergraph-based trajectories relaxes the notion of point-based trajectories by allowing multiple incidences between subsequent points in time. This enables a principled approach to the extraction of robust features from the bounding boxes produced by existing third-party detection methods. The experimental part is based on data collected from single-view camera systems over two years of non-stop recording within the Austrian KIRAS project SKIN1 on protecting critical infrastructure. The results show a substantial reduction of irrelevant false alarms and hence an improvement of the overall system's performance.
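To make the relaxation concrete, the following minimal Python sketch (not the authors' implementation; the IoU-based linking rule and its threshold are illustrative assumptions) shows a trajectory in which each time step holds a set of detector bounding boxes and consecutive steps may be linked many-to-many, unlike a point trajectory:

from dataclasses import dataclass, field

@dataclass
class HyperTrajectory:
    # frames[t] is the list of bounding boxes (x, y, w, h) active at time t
    frames: list = field(default_factory=list)
    # links[t] connects box indices of frame t to box indices of frame t+1;
    # multiple incidences per box are allowed, unlike in point trajectories
    links: list = field(default_factory=list)

    def add_frame(self, boxes, iou_threshold=0.3):
        """Append third-party detections and link them to the previous frame."""
        if self.frames:
            prev = self.frames[-1]
            self.links.append([
                (i, j)
                for i, a in enumerate(prev)
                for j, b in enumerate(boxes)
                if iou(a, b) >= iou_threshold   # hypothetical linking rule
            ])
        self.frames.append(list(boxes))

def iou(a, b):
    """Intersection over union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0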
This paper proposes a novel approach to determine texture periodicity, texture element size, and further characteristics such as the area of the basin of attraction when computing the similarity of a test image patch with a reference. The presented method utilizes the properties of a novel metric, the so-called discrepancy norm. Due to its Lipschitz and monotonicity properties, the discrepancy norm distinguishes itself from other metrics by well-formed and stable convergence regions. Both the periodicity and the convergence regions are closely related and have an immediate impact on the performance of a subsequent template matching and evaluation step. The general form of the proposed approach relies on the generation of discrepancy-norm-induced similarity maps at random positions in the image. By applying standard image processing operations such as watershed and blob analysis to the similarity maps, a robust estimate of the characteristic periodicity can be computed. From the general approach, a tailored version for orthogonally aligned textures is derived, which is robust to noise-disturbed images and suitable for estimation on near-regular textures. In an experimental set-up, the estimation performance is tested on samples from standardized image databases and compared with state-of-the-art methods. Results show that the proposed method is applicable to a wide range of nearly regular textures and that its estimates keep up with current methods. When a hypothesis generation/selection mechanism is added, it even outperforms the current state of the art.
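In one dimension, the discrepancy norm has a simple closed form: the largest absolute partial sum over all contiguous index intervals, computable in linear time from prefix sums. A minimal sketch of that computation (my illustration, not code from the paper):

import numpy as np

def discrepancy_norm(x):
    """1-D discrepancy norm: the largest absolute partial sum over all
    contiguous index intervals, computed in O(n) via prefix sums."""
    p = np.concatenate(([0.0], np.cumsum(np.asarray(x, dtype=float))))
    return float(p.max() - p.min())

# The norm grows monotonically with increasing misalignment between a
# patch and the reference, which yields the well-formed convergence
# regions exploited by the similarity maps.
print(discrepancy_norm([1, -1, 2, -2]))  # 2.0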
We address the challenge of parallelizing industrial high-performance inspection systems, comparing a conventional parallelization approach with an auto-parallelizing technique. To this end, we introduce the functional array processing language Single Assignment C (SAC), which relies on a hardware virtualization concept for the automated generation of parallel machine code for multi-core CPUs and GPUs. Additional software engineering aspects such as programmability, productivity, understandability, and maintainability, as well as the achieved performance gain, are discussed from a developer's point of view. With several illustrative benchmarking examples from the fields of image processing and machine learning, the relationship between runtime performance and development efficiency is analyzed.
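SAC source code is not reproduced here; as a rough illustration of the contrast in a more widely available language, the following Python sketch (all names are illustrative) juxtaposes an explicitly managed parallel loop with a single declarative array expression that, in the spirit of SAC's data-parallel with-loops, leaves the iteration strategy to the library rather than the developer:

from concurrent.futures import ProcessPoolExecutor
import numpy as np

def threshold_row(row, t=128):
    return [255 if v > t else 0 for v in row]

def threshold_manual(image_rows):
    """Conventional approach: the developer partitions the work and
    manages the worker pool explicitly."""
    with ProcessPoolExecutor() as pool:
        return list(pool.map(threshold_row, image_rows))

def threshold_array(image):
    """Array formulation: one declarative expression; how it is mapped
    onto cores or SIMD units is left to the runtime, not the developer."""
    return np.where(image > 128, 255, 0)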
Time-of-flight (TOF) full-field range cameras use a correlative imaging technique to generate three-dimensional measurements of the environment. Though reliable and cheap, they suffer from high measurement noise and errors that limit their practical use in industrial applications. We show how some of these limitations can be overcome with standard image processing techniques specially adapted to TOF camera data. Additional information contained in the multimodal images recorded in this setting, and not available in standard image processing settings, can be exploited to improve the reduction of measurement noise. Three extensions of standard techniques (wavelet thresholding, adaptive smoothing on a clustering-based image segmentation, and an extended anisotropic diffusion filtering) make use of this information and are compared on synthetic data and on data acquired from two different off-the-shelf TOF cameras. Of these methods, the adapted anisotropic diffusion technique gives the best results and can be implemented to run in real time on current graphics processing unit (GPU) hardware. Like traditional anisotropic diffusion, it requires some parameter adaptation to the scene characteristics, but it allows for low visualization delay and improved visualization of moving objects by avoiding the long averaging periods of traditional TOF image denoising.
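The following Python sketch outlines the kind of adaptation meant here: classic Perona-Malik diffusion on the depth map, with the diffusion flow damped by a per-pixel confidence derived from the amplitude image. It is a hedged illustration under assumed parameter names (n_iter, kappa, step), not the paper's implementation:

import numpy as np

def tof_anisotropic_diffusion(depth, amplitude, n_iter=20, kappa=0.05, step=0.2):
    """Perona-Malik-style diffusion on a TOF depth map; the amplitude
    image serves as a per-pixel confidence that damps diffusion across
    low-confidence pixels. Weighting scheme is illustrative."""
    d = depth.astype(float).copy()
    conf = amplitude / (amplitude.max() + 1e-12)   # normalized confidence
    for _ in range(n_iter):
        # differences to the four neighbors (periodic border, for brevity)
        grads = [np.roll(d, s, axis=a) - d
                 for a, s in ((0, 1), (0, -1), (1, 1), (1, -1))]
        update = np.zeros_like(d)
        for g in grads:
            c = np.exp(-(g / kappa) ** 2)          # edge-stopping function
            update += c * conf * g                 # damp low-confidence flow
        d += step * update
    return d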
In this paper, we introduce a novel algorithm for automatic fault detection in textures. We study the problem of finding a defect in regularly textured images with an approach based on the template matching principle. We aim at registering patches of an input image against a defect-free reference sample under a set of admissible transformations. This approach becomes feasible by introducing the so-called discrepancy norm as the fitness function, which exhibits distinctive properties such as monotonicity and a Lipschitz property. The proposed approach relies on only a few parameters, which makes it an easily adaptable algorithm for industrial applications and, above all, avoids complex tuning of configuration parameters. Experiments demonstrate the feasibility and reliability of the proposed algorithm on textures from real-world applications in the context of quality inspection of woven textiles.
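A minimal sketch of this registration idea, restricted to translations and using a simple projection-based 2-D surrogate of the discrepancy norm (both simplifications are mine, not the paper's):

import numpy as np

def discrepancy_norm_2d(e):
    """2-D surrogate: maximum of the 1-D discrepancy norm taken over the
    row and column projections of the residual (one of several possible
    2-D extensions; chosen here for brevity)."""
    def dn(x):
        p = np.concatenate(([0.0], np.cumsum(x)))
        return p.max() - p.min()
    return max(dn(e.sum(axis=0)), dn(e.sum(axis=1)))

def best_match(patch, reference):
    """Slide the test patch over the defect-free reference; the admissible
    transformations here are plain translations (a simplification)."""
    ph, pw = patch.shape
    rh, rw = reference.shape
    scores = np.full((rh - ph + 1, rw - pw + 1), np.inf)
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            window = reference[i:i + ph, j:j + pw].astype(float)
            scores[i, j] = discrepancy_norm_2d(patch.astype(float) - window)
    # a high minimum score means no defect-free placement exists,
    # i.e. the patch likely contains a fault
    return np.unravel_index(scores.argmin(), scores.shape), scores.min()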
In this paper, the problem of high-performance software engineering is addressed in the context of image processing, with regard to productivity and optimized exploitation of hardware resources. To this end, we introduce the functional array processing language Single Assignment C (SaC), which relies on a hardware virtualization concept for automated, parallel machine code generation. An illustrative benchmarking example demonstrates both the utility and the adequacy of SaC for image processing.
Time-of-flight (TOF) cameras are a comparatively new computer vision technique for measuring the 3D structure of objects. The underlying principle is rather simple and has long been applied with sound or light in all kinds of sonar and lidar systems. Here, however, modulated light waves are used, and the signals are received by a parallel pixel-array structure. From the travel time at each pixel, the depth structure of a distant object can be estimated. The technique requires measuring the intensity differences and ratios of several pictures with extremely high accuracy; in practice one therefore faces rather high noise levels. Object features such as reflectance and roughness influence the measurement results. This leads to partly high noise levels, with variances depending on the illumination and material parameters. It can be shown that a reciprocal relation exists between the variance of the phase and the squared amplitude of the signal. On the other hand, objects can be distinguished using these dependencies on surface characteristics. It is shown that, based on local variances assigned to separated objects, appropriate denoising can be performed using wavelets and edge-preserving smoothing methods.
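For concreteness, the standard four-bucket demodulation behind such measurements, together with the variance relation just mentioned, in a short Python sketch (sign conventions of the phase samples vary between cameras, and the proportionality constant in the variance estimate is illustrative):

import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def tof_depth(a0, a1, a2, a3, f_mod=20e6):
    """Four-bucket demodulation for a continuous-wave TOF pixel;
    a0..a3 are the four samples of the correlation function."""
    phase = np.arctan2(a3 - a1, a0 - a2) % (2 * np.pi)
    amplitude = 0.5 * np.hypot(a3 - a1, a0 - a2)
    offset = 0.25 * (a0 + a1 + a2 + a3)
    depth = C * phase / (4 * np.pi * f_mod)     # unambiguous range c/(2 f_mod)
    # the reciprocal relation mentioned above: the phase variance grows with
    # the intensity offset and inversely with the squared amplitude
    phase_var = offset / (2.0 * amplitude ** 2 + 1e-12)
    return depth, amplitude, phase_var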
Thin-film sensors for use in automotive or aeronautic applications must conform to very high quality standards. Because some defects cannot be detected by conventional electronic measurements, an accurate optical inspection is imperative to ensure the long-term quality of the produced thin-film sensor. In this particular case, resolutions of 1 µm per pixel are necessary to meet the required high quality standards. Furthermore, it has to be guaranteed that defects are detected robustly and with high reliability.
In this paper, a new method is proposed that handles local deformations due to production variability without having to use computationally intensive local image registration operations. The main idea of this method is a combination of efficient morphological preprocessing and a multi-step comparison strategy based on logical implication. The main advantage of this approach is that the neighborhood operations that provide the robustness of the image comparison can be computed in advance and stored in a modified reference image. By virtue of this approach, no further neighborhood operations have to be carried out on the acquired test image during inspection time.
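A minimal Python sketch of this precompute-then-implicate scheme (the tolerance radius and binary masks are illustrative assumptions; the paper's multi-step strategy is more elaborate):

import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def prepare_reference(ref_mask, tolerance=2):
    """Offline: fold the deformation tolerance into modified reference
    images, so no neighborhood operations touch the test image later."""
    selem = np.ones((2 * tolerance + 1, 2 * tolerance + 1), dtype=bool)
    return binary_dilation(ref_mask, selem), binary_erosion(ref_mask, selem)

def inspect(test_mask, dilated_ref, eroded_ref):
    """Online: two pixel-wise logical implications, no registration needed."""
    extra   = test_mask & ~dilated_ref    # test => dilated reference violated
    missing = eroded_ref & ~test_mask     # eroded reference => test violated
    return extra | missing                # defect map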
A systematic experimental study shows that this method is superior to existing approaches in terms of reliability, robustness, and computational efficiency. As a result, the requirements of high-resolution inspection and high-performance throughput, while accounting for local deformations, are met very well by the implemented inspection system. The work is substantiated with theoretical arguments and a comprehensive analysis of the obtained performance and practical usability in the above-mentioned, challenging industrial environment.