This PDF file contains the front matter associated with SPIE Proceedings Volume 12272, including the Title Page, Copyright information, Table of Contents, and Conference Committee Page.
Two unconventional probability distribution models for spectral backgrounds are described. Their utility is demonstrated for two contrasting detection problems: atmospheric plumes and sub-pixel opaque ground targets. Neither background model belongs to the conventional class of elliptically contoured distributions.
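For context, an elliptically contoured density for a d-band spectrum x, the conventional class both models depart from, has the form

```latex
f(\mathbf{x}) \;=\; k_d\,|\boldsymbol{\Sigma}|^{-1/2}\,
g\!\left((\mathbf{x}-\boldsymbol{\mu})^{\top}\boldsymbol{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{\mu})\right),
```

so its level sets are ellipsoids, with the multivariate Gaussian as the special case g(t) = e^{-t/2}. Models outside this class can capture, for example, asymmetry or direction-dependent tail behavior in the background.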
In search and rescue (SAR) missions, every minute counts. Semi-collapsed buildings are among the most difficult scenarios encountered by search and rescue teams. A UAV-based exploration system can provide crucial information on the accessibility of different sectors, on hazards, and on injured people. The research project “UAV-Rescue” aims to provide UAV-borne sensing and to investigate the use of AI to support this powerful tool. The sensor suite contains a radar sensor for detecting people based on breathing and pulse movement. A neural network interprets the extracted data to identify signs of human life and thus locate persons in need of rescue. We also fuse radar and lidar data to explore the environment of the UAV and obtain a robust basis for simultaneous localization and mapping even under restricted visibility conditions. Additionally, we plan to use AI to support the path planning of the drone, taking the digital map as input. Furthermore, AI is leveraged to map intact and damaged building structures. Potentially hazardous gases common to urban settings are tracked. We fuse the acquired information into a model of the explored area with marked locations of potential hazards and people to be rescued. The project also addresses ethical and societal issues raised by the use of UAVs close to people as well as by AI-supported decision making. The talk will present the system concept, including interfaces and sensor fusion approaches. We will show first results from static and dynamic measurement campaigns demonstrating the capability of radar- and lidar-based sensing in a complex urban environment.
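The abstract does not specify how the radar data are prepared for the network; as background, a standard first step in radar vital-sign sensing is to recover breathing motion from the phase of the return at a fixed range bin. A minimal sketch, with all parameter values hypothetical:

```python
import numpy as np

def estimate_breathing_rate(iq, fs):
    """Estimate a breathing rate (Hz) from the unwrapped phase of a
    complex radar return at a fixed range bin (minimal sketch)."""
    phase = np.unwrap(np.angle(iq))       # chest motion modulates the phase
    phase -= phase.mean()                 # remove the DC offset
    spectrum = np.abs(np.fft.rfft(phase * np.hanning(len(phase))))
    freqs = np.fft.rfftfreq(len(phase), d=1.0 / fs)
    band = (freqs > 0.1) & (freqs < 0.7)  # typical breathing: 6-42 breaths/min
    return freqs[band][np.argmax(spectrum[band])]

# Example with a synthetic 0.25 Hz (15 breaths/min) chest motion
fs = 20.0                                 # slow-time sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
iq = np.exp(1j * 0.5 * np.sin(2 * np.pi * 0.25 * t))  # phase-modulated return
print(estimate_breathing_rate(iq, fs))    # ~0.25
```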
In the maritime environment, Situational Awareness (SA) is a crucial task for many applications, including the defense of the naval tactical space. In this context, Electro-Optical (EO) sensors, and particularly InfraRed (IR) sensors, contribute to building the Local Area Picture (LAP). The purpose of this study is to address the challenging task of highlighting extended targets against the open-sea background without any prior knowledge of their size and position within the images. In this work, only single-frame object detection algorithms have been considered. As this task has been extensively explored in the three-channel color image domain, we adapted several state-of-the-art strategies native to the color domain to the monochromatic IR domain. The algorithms have been tested on a dataset collected with a cooled Medium Wavelength (MW) sensor and an uncooled Long Wavelength (LW) sensor. The ground truth (GT) has been built through direct observation. Each technique has then been evaluated on the images from the two sub-bands according to widely used performance indices.
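One common way to reuse color-native detectors on monochromatic IR imagery (not necessarily the adaptation chosen by the authors) is to contrast-stretch the IR frame and replicate it across three channels so that models pretrained on color images accept it; a sketch:

```python
import numpy as np

def ir_to_three_channels(ir_frame, lo_pct=1.0, hi_pct=99.0):
    """Map a high-bit-depth monochromatic IR frame to a 3-channel 8-bit
    image for color-native detectors (illustrative sketch)."""
    lo, hi = np.percentile(ir_frame, [lo_pct, hi_pct])   # robust contrast stretch
    norm = np.clip((ir_frame.astype(np.float32) - lo) / max(hi - lo, 1e-6), 0, 1)
    img8 = (255 * norm).astype(np.uint8)
    return np.stack([img8] * 3, axis=-1)                 # replicate to RGB layout
```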
One of the most prominent applications of fiber-optic Distributed Acoustic Sensing (DAS) is perimeter security via fence monitoring, which is possible when a fiber is attached to the fence. In this study, we aim to detect and classify events occurring around said fence, such as climbing, cutting, and bending. For this, we investigate Deep Learning algorithms, more specifically Convolutional Neural Networks (CNN), as a means to detect anomalies and classify them. We recorded 48,445 samples of the mentioned events, which were carefully processed and labeled. From each record, we exploited multiple data instances, resulting in a training dataset large enough to produce a robust classifier. We report the optimum network architecture that suited our study for both the anomaly detection and the classification task. The optimal model was tested before and after deployment on-site; we report the quantified performance on a test set via a confusion matrix, along with observations about the model’s behaviour in the field. Furthermore, we compare our trials and results on two types of fences, namely rigid and loose, to show how the fence type affects the performance of the trained CNN models, as the signal propagates differently in rigid and loose fences. We report an overall accuracy of 96.15% for the optimal anomaly detection model, and a lower 52.9% for the 3-class classification model. These results are explained and commented on. Finally, we conclude by providing an educated proposal for future improvements.
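The abstract does not detail the network itself; purely as an illustration of the kind of CNN that operates on time-distance DAS patches, here is a minimal PyTorch sketch (architecture and sizes are hypothetical, not the paper's reported optimum):

```python
import torch
import torch.nn as nn

class DASClassifier(nn.Module):
    """Small CNN over time-distance DAS patches (illustrative sketch)."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_classes)   # e.g. climb / cut / bend

    def forward(self, x):                      # x: (batch, 1, time, channels)
        return self.head(self.features(x).flatten(1))

model = DASClassifier()
logits = model(torch.randn(8, 1, 128, 64))     # 8 random patches
print(logits.shape)                            # torch.Size([8, 3])
```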
A point cloud can provide a detailed three dimensional (3D) description of a scene. Partitioning of a point cloud into semantic classes is important for scene understanding, which can be used in autonomous navigation for unmanned vehicles and in applications including surveillance, mapping, and reconnaissance. In this paper, we give a review of recent machine learning techniques for semantic segmentation of point clouds from scanning lidars and an overview of model compression techniques. We focus especially on scan-based learning approaches, which operate on single sensor sweeps. These methods do not require data registration and are suitable for real-time applications. We demonstrate how these semantic segmentation techniques can be used in defence applications in surveillance or mapping scenarios with a scanning lidar mounted on a small UAV.
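A common input representation for scan-based methods is the spherical projection of a single sweep into a dense range image, which lets standard 2D convolutions run in real time. A minimal sketch, with field-of-view values chosen as plausible assumptions for a 64-beam sensor rather than taken from the paper:

```python
import numpy as np

def sweep_to_range_image(points, h=64, w=1024, fov_up=15.0, fov_down=-25.0):
    """Project one lidar sweep (N, 3) into an (h, w) range image, the input
    representation used by scan-based segmentation networks (sketch)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                            # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(r, 1e-9))        # elevation angle
    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)
    u = ((1 - (yaw + np.pi) / (2 * np.pi)) * w).astype(int) % w
    v = ((fov_up - pitch) / (fov_up - fov_down) * h).clip(0, h - 1).astype(int)
    img = np.full((h, w), -1.0)                       # -1 marks empty pixels
    order = np.argsort(-r)                            # keep the closest return
    img[v[order], u[order]] = r[order]
    return img
```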
This paper discusses a rapid workflow for the automated generation of geospecific terrain databases for military simulation environments. Starting from the photogrammetric data products of an oblique aerial camera, the process comprises deterministic terrain extraction from digital surface models and semantic building reconstruction from 3D point clouds. Further, an efficient supervised technique using little training data is applied to recover land classes from the true-orthophoto of the scene, and visual artifacts from parked vehicles to be modeled separately are suppressed through inpainting based on generative adversarial networks. As a proof of concept for the proposed pipeline, a dataset of the Altmark/Schnoeggersburg training area in Germany was prepared and transformed into a ready-to-use environment for the commercial Virtual Battlespace Simulator (VBS). The obtained result was compared to another automatically derived database and to a semi-manually crafted scene regarding visual accuracy, functionality, and the necessary time effort.
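The abstract leaves the land-classification step unspecified; one plausible stand-in for a supervised technique that copes with little training data is a per-pixel random forest over orthophoto channels. A sketch, with all names hypothetical:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_land(ortho, labeled_mask):
    """Per-pixel land classes from a true-orthophoto using a small set of
    annotated pixels (illustrative stand-in, not the paper's method).
    ortho: (H, W, C) float array; labeled_mask: (H, W) int array with
    class ids >= 0 on the few annotated pixels and -1 elsewhere."""
    ys, xs = np.where(labeled_mask >= 0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(ortho[ys, xs], labeled_mask[ys, xs])      # train on labeled pixels
    flat = ortho.reshape(-1, ortho.shape[-1])         # classify every pixel
    return clf.predict(flat).reshape(ortho.shape[:2])
```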
Geolocation of vehicles, objects, or people is commonly done using global navigation satellite system (GNSS) receivers. Such a receiver for GNSS-based positioning is either built into the vehicle or provided by a separate handheld device such as a smartphone. Self-localization in this way is simple and accurate to within a few meters.
Environments where no GNSS service is available require other strategies for self-localization. Especially in the military domain, it is necessary to be prepared for such GNSS-denied scenarios. Awareness of one's own position in relation to other units is crucial in military operations, especially where joint operations have to be coordinated geographically and temporally. However, even if a common map-like representation of the terrain is available, precise self-localization relative to this map is not necessarily easy.
In this paper, we propose an approach for LiDAR-based localization of a vehicle-based sensor platform in an urban environment. Our approach is to use 360° scanning LiDAR sensors to generate short-duration point clouds of the local environment. In these point clouds, we detect pole-like 3D features such as traffic sign poles, lampposts, or tree trunks. The relative distances and orientations of these features to each other are fairly distinctive, and the matrix of these individual distances and orientations can be used to determine the position of the sensor relative to a current map. This map can either be created in advance for the entire area, or a cooperative preceding vehicle with an equivalent sensor setup can generate it. By matching the found LiDAR-based 3D features with those of the map, not only the position of the sensor platform but also its orientation can be determined. We provide first experimental results of the proposed method, which were achieved with measurements by Fraunhofer IOSB’s sensor-equipped vehicle MODISSA.
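A minimal sketch of the matching idea, assuming 2D pole positions have already been extracted and the reference map contains at least as many poles as the local scan; the greedy correspondence search and the tolerance are simplified placeholders, not the authors' actual procedure:

```python
import numpy as np

def match_pole_constellation(local_xy, map_xy, tol=0.5):
    """Match pole features from a short-duration point cloud (local_xy,
    (n, 2)) to map poles (map_xy, (m, 2)) via pairwise-distance signatures,
    then recover the rigid transform (minimal sketch)."""
    # Pairwise distance matrices are invariant to rotation and translation.
    d_local = np.linalg.norm(local_xy[:, None] - local_xy[None], axis=-1)
    d_map = np.linalg.norm(map_xy[:, None] - map_xy[None], axis=-1)

    # Greedy correspondence: pair poles whose sorted distance profiles agree.
    pairs = []
    for i in range(len(local_xy)):
        prof_i = np.sort(d_local[i])
        costs = [np.abs(np.sort(d_map[j])[: len(prof_i)] - prof_i).mean()
                 for j in range(len(map_xy))]
        j = int(np.argmin(costs))
        if costs[j] < tol:
            pairs.append((i, j))

    # Rigid 2D transform (Kabsch) from the matched pairs: position + heading.
    a = local_xy[[i for i, _ in pairs]]
    b = map_xy[[j for _, j in pairs]]
    a0, b0 = a - a.mean(0), b - b.mean(0)
    u, _, vt = np.linalg.svd(a0.T @ b0)
    R = (u @ vt).T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        vt[-1] *= -1
        R = (u @ vt).T
    t = b.mean(0) - R @ a.mean(0)
    return R, t, pairs
```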
Performing specific object detection and recognition at the imaging sensor level raises many technical and scientific challenges. Today, state-of-the-art detection performance is obtained with Deep Convolutional Neural Network (CNN) models. However, reaching the expected CNN behavior in terms of sensitivity and specificity requires mastering the training dataset. In this paper, we explore a new way of acquiring images of military vehicles under the sanitized and controlled conditions of the laboratory in order to train a CNN to recognize the same visual signature on real vehicles in realistic outdoor situations. By combining sanitized images, counter-examples, and different data augmentation techniques, our investigations aim at reducing the need for complex outdoor image acquisition. First results demonstrate the feasibility of detecting and classifying military vehicles in real situations by exploiting only signatures from miniature models.
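Data augmentation is central to bridging the lab-to-field domain gap; the paper's exact pipeline is not given, but a typical torchvision chain for this purpose might look as follows (all parameter values hypothetical):

```python
from torchvision import transforms

# Illustrative augmentation pipeline, applied to PIL images of the
# miniature models before training (values are assumptions, not the paper's).
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.5, 1.0)),   # vary apparent distance
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    transforms.RandomAffine(degrees=10, translate=(0.1, 0.1)),
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),  # optics/atmosphere
    transforms.ToTensor(),
])
```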
Bathymetric LiDAR has seen dramatic development over the last ten years: on the order of ten times the resolution, dramatically improved hydrographic object detection, and operation in higher-turbidity waters and complex river environments. There has been a strong increase in the global use of the technology for applications such as coastal surveys for sea charts, shoreline erosion monitoring, coastal infrastructure planning, environmental mapping, and river surveys for flood risk analysis and mitigation. Leading hydrographic offices have implemented the technology for shallow-water surveys in their national coastal mapping programs, and further countries are planning to do the same. The latest advancements also include developments of bathymetric LiDAR for maritime surveillance applications, where the data must be available in close to real time already in the survey aircraft, to enable immediate analysis and response related to safety at sea and the prevention of illegal activities.
The threat of unmanned aerial vehicles (UAVs) has been well documented in recent conflicts. It has therefore become increasingly important to investigate different means for countering this threat. One of the potential means is to use a laser. The laser may be used as a support sensor to others, like radar or IR, to detect, recognise, and track the UAV, and it can dazzle and destroy the UAV's optical sensors. A laser can also be used to sense the atmospheric attenuation and turbulence along slant paths, which are critical to the performance of a high-power laser weapon aimed at destroying the UAV. This paper will investigate how the atmosphere and beam jitter due to tracking and platform pointing errors affect the performance of the laser, whether used as a sensor, a countermeasure, or a weapon.
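As background for how these error sources combine, one common first-order bookkeeping (an approximation, not the paper's model) treats diffraction, turbulence spreading, and residual jitter as independent contributions that add in quadrature to the long-term spot size at range L:

```latex
w_{LT}^{2}(L) \;\approx\; w_{\mathrm{diff}}^{2}(L) + w_{\mathrm{turb}}^{2}(L) + \sigma_{j}^{2}L^{2},
\qquad
\langle I \rangle_{\mathrm{target}} \;\propto\; \frac{P\,\tau_{\mathrm{atm}}(L)}{\pi\, w_{LT}^{2}(L)},
```

where sigma_j is the RMS pointing-jitter angle and tau_atm(L) the slant-path transmission. The turbulence term and the transmission are exactly the quantities that laser-based atmospheric sensing along the engagement path is meant to estimate.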
In our work, we present field tests under various conditions using Differential Absorption LIDAR (DIAL) with two tunable CO2 lasers. The setup, using two independent synchronized lasers, allows us to achieve a detection sensitivity that outperforms other setups. This is shown using measurement data from the field tests under realistic conditions. We investigate not only various DIAL setups, but also various signal processing algorithms applicable to the two-laser setup. The theoretical analysis is additionally confirmed by the data obtained during the field tests. The results enable shortening the overall detection time while maintaining the overall detection sensitivity.
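For reference, the textbook DIAL retrieval that both lasers feed (standard form, not specific to this paper) recovers the mean concentration N in a range cell [R, R+Delta R] from the on- and off-resonance returns:

```latex
N \;=\; \frac{1}{2\,\Delta\sigma\,\Delta R}\,
\ln\!\left[\frac{P_{\mathrm{off}}(R+\Delta R)\;P_{\mathrm{on}}(R)}
               {P_{\mathrm{on}}(R+\Delta R)\;P_{\mathrm{off}}(R)}\right],
\qquad \Delta\sigma = \sigma_{\mathrm{on}} - \sigma_{\mathrm{off}},
```

where P_on and P_off are the received powers at the absorbed and reference wavelengths. One motivation for synchronizing two independent lasers is to minimize the delay between the on- and off-line shots, so that the atmospheric terms in the ratio cancel more cleanly.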
This paper presents experimental investigations on active compressive sensing imaging through turbulence. We developed a laboratory testbed in which different compressive sensing configurations have been tested under various turbulence conditions. Series of images of a target were acquired and analyzed using three different metrics. The measurements have been performed under continuous-wave laser illumination at 635 nm.
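For context, the measurement model common to compressive sensing configurations of this kind (generic background, not the testbed's specific parameters) is

```latex
\mathbf{y} \;=\; \boldsymbol{\Phi}\mathbf{x} + \mathbf{e},
\qquad \boldsymbol{\Phi}\in\mathbb{R}^{M\times N},\; M \ll N,
```

where x is the vectorized scene, the rows of Phi are the M projection patterns, and e collects noise, including turbulence-induced fluctuations; reconstruction then seeks the x consistent with y under a sparsity or smoothness prior.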
We present a polarized dual Single Pixel Camera (SPC) operating in the Short Wave Infrared (SWIR) spectral range that uses a total-variation-based reconstruction method to reconstruct polarized images from an ensemble of compressed measurements. Walsh-Hadamard matrices are used for generating pseudo-random measurements, which speeds up the reconstruction and enables the reconstruction of high-resolution images. The system combines a Digital Micromirror Device (DMD), two nearly identical InGaAs photodiodes, and two polarization filters. Roughly half of the DMD mirrors are oriented toward the first photodiode and the complementary DMD mirrors are oriented toward the second photodiode. Total-variation-based reconstruction strategies have been implemented and evaluated both on simulated compressed measurements and on real outdoor scenes using the developed system.
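The complementary-mirror arrangement has a neat algebraic reading: the two photodiodes together realize each ±1 Walsh-Hadamard row as a pair of 0/1 masks, and differencing the channels recovers the signed measurement. A minimal simulation of that principle (fully sampled for simplicity; the paper's total-variation solver for undersampled data is not reproduced here):

```python
import numpy as np
from scipy.linalg import hadamard

n = 64                                   # toy 8x8 scene, flattened
x = np.zeros(n); x[18:22] = 1.0          # a small bright feature
H = hadamard(n)                          # +/-1 Walsh-Hadamard patterns

# The DMD realizes each +/-1 row as two complementary 0/1 masks: mirrors
# at +1 send light to photodiode A, mirrors at -1 to photodiode B.
masks_a = (H + 1) / 2
masks_b = (1 - H) / 2
y_a = masks_a @ x                        # photodiode A readings
y_b = masks_b @ x                        # photodiode B readings

# Differencing the channels recovers the signed Hadamard measurements.
y = y_a - y_b                            # equals H @ x
x_hat = H.T @ y / n                      # H is orthogonal up to a factor n
print(np.allclose(x_hat, x))             # True
```

In the actual camera only a subset of rows would be measured, with the total-variation prior filling in the missing information; the differencing step also suppresses background common to both channels.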
A new and challenging vision paradigm has recently gained prominence and proven its capabilities compared to traditional imagers: event-based vision. Instead of capturing the whole sensor area at a fixed frame rate as in a frame-based camera, spike sensors, or event cameras, report the location and the sign of brightness changes in the image. Although the currently available spatial resolutions are quite low (640x480 pixels) for these event cameras, the real interest lies in their very high temporal resolution (in the range of microseconds) and very high dynamic range (up to 140 dB). Thanks to the event-driven approach, their power consumption and processing power requirements are quite low compared to conventional cameras. This latter characteristic is of particular interest for embedded applications, especially for situational awareness. The main goal of this project is to detect and track activity zones from the spike event stream, and to notify the standard imager where the activity takes place. By doing so, automated situational awareness is enabled by analyzing the sparse information of event-based vision and waking up the standard camera at the right moments and at the right positions, i.e., the detected regions of interest. We demonstrate the capacity of this bimodal vision approach to take advantage of both cameras: spatial resolution for the standard camera and temporal resolution for the event-based camera. An opto-mechanical demonstrator has been designed to integrate both cameras in a compact visual system with embedded software processing, enabling the perspective of autonomous remote sensing. Several field experiments demonstrate the performance and the interest of such an autonomous vision system. The emphasis is placed on the ability to detect and track fast-moving objects, such as fast drones. Results and performance are evaluated and discussed on these realistic scenarios.
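As an illustration of how little computation the wake-up logic needs, here is a toy event-rate detector that bins a short window of events into coarse cells and returns the regions busy enough to hand to the frame camera (cell size and threshold are hypothetical; the project's actual detector and tracker are certainly more elaborate):

```python
import numpy as np

def activity_regions(events, shape=(480, 640), cell=32, rate_thresh=200):
    """Flag active cells from an event stream over a short time window.
    events: mapping with integer pixel arrays 'x' and 'y' (sketch)."""
    h, w = shape
    grid = np.zeros((h // cell, w // cell))
    np.add.at(grid, (events["y"] // cell, events["x"] // cell), 1)
    active = np.argwhere(grid > rate_thresh)          # cells worth waking up for
    return [(cx * cell, cy * cell, cell, cell) for cy, cx in active]

# A synthetic burst of events around (300, 200) yields one ROI for the
# frame camera:
rng = np.random.default_rng(0)
ev = {"x": rng.integers(290, 310, 500), "y": rng.integers(190, 210, 500)}
print(activity_regions(ev))
```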
This paper describes the proposal, manufacture, and overall testing of a portable measuring sensor based on a fiber-optic Bragg grating (FBG). The sensor is made of two-component silicone rubber (ZA 50 LT) and can be used to monitor the density of car traffic in cities at speeds up to 60 km/h in one lane. The construction of the sensor, which is over 2 m long and 1.8 cm wide, contains an optical fiber with an FBG encapsulated in a carbon tube and allows the detection of individual vehicle axles. Functional verification of the sensor was performed in real traffic on a total of 761 vehicles of various types, with a high detection success rate of 97.19%.
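Axle detection from such a sensor plausibly reduces to peak-picking on the Bragg wavelength-shift signal; a sketch using scipy, with all threshold values hypothetical:

```python
import numpy as np
from scipy.signal import find_peaks

def count_axles(wavelength_shift, fs, min_gap_s=0.05, prominence=5.0):
    """Count vehicle axles as peaks in an FBG wavelength-shift signal
    (illustrative sketch; thresholds are assumptions, not from the paper).
    wavelength_shift: Bragg shift in picometres, sampled at fs Hz."""
    peaks, _ = find_peaks(
        wavelength_shift,
        prominence=prominence,            # reject road and temperature noise
        distance=int(min_gap_s * fs),     # axles cannot be closer than this
    )
    return len(peaks), peaks / fs         # axle count and crossing times (s)
```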
The detection of aerosols in general, and bioaerosols more specifically, has gained increased importance in multiple fields. While environmental scientists are increasingly interested in the impact of aerosols on the climate, researchers in the security sector are looking for ways to remotely detect dangerous substances from safe distances. Additionally, due to the coronavirus pandemic, the detection of bioaerosols has gained significant relevance in sectors like public health, transportation, and aviation. As a result, more accurate, i.e. more sensitive and specific, measurement equipment is needed. Here we present the design concept for a new sensor system designed to measure thin bioaerosol clouds. For detection, air samples are excited with laser light to generate a signal based on laser-induced fluorescence. The fluorescence is collected in an integrating sphere to optimize the signal. Inside the integrating sphere, multiple sensors are placed, each combined with a filter to exclude all signals not belonging to a certain, agent-specific wavelength interval. Through the intelligent combination of spectral intervals, a specific characteristic of the studied air sample is measured. Based on the measured characteristic, a classification is performed to determine the category of the sample. Development aims at testing indoor air quality in real time.
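The abstract does not name the classifier; as a toy illustration of categorizing a sample from a handful of filtered-detector readings, a nearest-neighbour comparison of normalized band intensities could look like this (band values and class names are entirely hypothetical):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def band_features(channels):
    """Normalize per-band fluorescence intensities so the feature captures
    spectral shape rather than cloud concentration (illustrative sketch)."""
    v = np.asarray(channels, dtype=float)
    return v / max(v.sum(), 1e-12)

# Hypothetical reference library: mean band intensities per substance class.
refs = {"pollen": [0.9, 2.0, 1.1, 0.3], "bacteria": [1.8, 0.7, 0.4, 0.2],
        "inert dust": [0.5, 0.5, 0.5, 0.5]}
X = np.array([band_features(v) for v in refs.values()])
clf = KNeighborsClassifier(n_neighbors=1).fit(X, list(refs.keys()))

sample = band_features([1.7, 0.8, 0.5, 0.2])     # a new measurement
print(clf.predict([sample]))                      # -> ['bacteria']
```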
This article describes research work in search of an optimized solution for measuring compressive force by detecting the intensity of the optical power coupled into an optical fiber. In the experimental part of the research, a 3D-printed assembly was used: its outer case was made of FLEXFILL 98A material, its inner part was formed by a three-part PETG layer, and the middle sensory part was interchangeable. This model was used to test different shapes of deformation elements in the variable part in order to find suitable configurations of the deformation plate. A standard 50/125 μm multimode graded-index optical fiber was placed in the sensory part. It can be assumed that the results of this research can be used for the design of sensors based on the detection of changes in optical power intensity.
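In practice, such an intensity-based sensor needs a calibration curve mapping the coupled-power change to force; a minimal sketch of fitting and inverting one (all numbers hypothetical):

```python
import numpy as np

# Illustrative calibration: fit measured optical power loss against applied
# reference forces, then invert the fit to read out force from a new reading.
force_N = np.array([0, 50, 100, 150, 200, 250])           # reference loads
power_dB = np.array([0.0, -0.4, -0.9, -1.5, -2.2, -3.0])  # coupled-power change

coeffs = np.polyfit(power_dB, force_N, deg=2)             # inverse calibration
read_force = np.poly1d(coeffs)

print(read_force(-1.2))                 # force estimate for a -1.2 dB reading
```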