In this talk, I will discuss our recent work on human activity recognition using learning with less labels. In particular, I will present our work employing semi-supervised learning (SSL), self-supervised learning, and zero-shot learning. First, I will present our Uncertainty-aware Pseudo-label Selection (UPS) method for semi-supervised learning, where the goal is to leverage a large unlabeled dataset alongside a small labeled dataset. Next, I will present our self-supervised method, TCLR: Temporal Contrastive Learning for Video Representations, which does not require labeled data. Finally, I will present our Pairwise-Similarity Zero-shot Action Recognition (PS-ZAR) method, where the goal is to classify action classes that were not seen during training.
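The core idea behind UPS-style pseudo-label selection is to keep only those unlabeled samples whose predictions are both confident and low-uncertainty before adding them to the training set. The sketch below illustrates that filtering step under simplifying assumptions: uncertainty is estimated as the variance of the predicted class's probability across several stochastic forward passes (e.g., MC dropout), and the thresholds `conf_thresh` and `unc_thresh` are illustrative, not values from the paper.

```python
import numpy as np

def select_pseudo_labels(probs, conf_thresh=0.9, unc_thresh=0.05):
    """Keep unlabeled samples whose prediction is both confident
    (high mean softmax probability) and low-uncertainty (low variance
    across stochastic forward passes).

    probs: array of shape (n_passes, n_samples, n_classes) holding
           softmax outputs from several stochastic forward passes.
    Returns (indices, labels) of the selected samples.
    """
    mean_probs = probs.mean(axis=0)        # (n_samples, n_classes)
    confidence = mean_probs.max(axis=1)    # peak class probability
    labels = mean_probs.argmax(axis=1)     # tentative pseudo-labels
    # Uncertainty: variance of the predicted class's probability
    # across the stochastic passes.
    var = probs.var(axis=0)                # (n_samples, n_classes)
    uncertainty = var[np.arange(len(labels)), labels]
    keep = (confidence >= conf_thresh) & (uncertainty <= unc_thresh)
    return np.nonzero(keep)[0], labels[keep]
```

Samples that pass both tests are treated as labeled data in the next training round; the rest stay in the unlabeled pool.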
This presentation is on "Machine vision across the spectra: from visible light to thermal infrared."
Spiking neural networks (SNNs) extend traditional artificial neural networks (ANNs) by incorporating greater biological fidelity, including event-driven operation, sparsity, spatial/temporal dynamics, parallelism, and collocated processing and memory. These features can translate into efficient computing hardware designs, and consequently SNNs offer potential advantages for synthetic aperture radar automatic target recognition (SAR ATR).
Here we provide a wide exploration of several SNN approaches, spanning both algorithms and computing hardware. Using the MSTAR and SAMPLE benchmark datasets, we develop SAR ATR networks, compare SNN computational-complexity tradeoffs, and analyze how the respective neuromorphic architectural choices impact spiking-neural ATR performance.
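The event-driven, sparse behavior that distinguishes SNNs from conventional ANNs can be illustrated with a leaky integrate-and-fire (LIF) neuron, a common building block in spiking models. The following is a minimal sketch, not the networks used in the paper; the `threshold` and `decay` values are illustrative.

```python
import numpy as np

def lif_forward(input_current, threshold=1.0, decay=0.9):
    """Simulate a layer of leaky integrate-and-fire (LIF) neurons.

    input_current: array of shape (timesteps, n_neurons).
    Returns a binary spike train of the same shape. Each membrane
    potential leaks by `decay` per step, integrates its input, and
    resets to zero when it crosses `threshold` (emitting a spike).
    """
    T, n = input_current.shape
    v = np.zeros(n)                        # membrane potentials
    spikes = np.zeros((T, n))
    for t in range(T):
        v = decay * v + input_current[t]   # leak + integrate
        fired = v >= threshold             # event: threshold crossing
        spikes[t] = fired
        v[fired] = 0.0                     # reset after a spike
    return spikes
```

Because downstream computation is triggered only where `spikes` is nonzero, activity is sparse in both space and time, which is precisely the property neuromorphic hardware exploits for efficiency.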
Deep learning methods have proliferated for imagery-based target recognition, scene observation, and context analysis. Image fusion has many applications, especially when the image modalities are collected simultaneously, as with electro-optical and infrared imagers. When the modalities are collected from different platforms, image fusion requires more careful image registration; however, with advances in deep learning, data analysis can minimize the impact of varying operating conditions (e.g., sensor, environment, target). One example of importance is the fusion of electro-optical (EO) and synthetic aperture radar (SAR) imagery. This paper reviews and assesses current methods of EO/SAR image fusion, machine learning, and deep learning. Prior work in EO/SAR imagery had limited data collections, but machine learning was nonetheless applied.
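A simple way to ground the EO/SAR fusion discussion is early (channel-level) fusion, in which co-registered chips from both modalities are stacked into one multi-channel input for a downstream network. The sketch below is a hypothetical minimal example, not a method from the review; it assumes the chips are already registered to the same grid, which, as noted above, is the hard part when the modalities come from different platforms.

```python
import numpy as np

def early_fuse(eo, sar):
    """Channel-level (early) fusion of co-registered EO and SAR chips.

    eo:  float array (H, W, 3), e.g. RGB reflectance in [0, 1].
    sar: float array (H, W), e.g. calibrated backscatter (positive).
    The SAR channel is log-scaled (dB-like) and min-max normalized
    so its dynamic range is comparable to the EO channels, then
    stacked as a fourth channel.
    """
    sar_db = np.log10(sar + 1e-6)
    rng = sar_db.max() - sar_db.min()
    sar_norm = (sar_db - sar_db.min()) / (rng + 1e-12)
    return np.dstack([eo, sar_norm])   # (H, W, 4) input for a CNN
```

Late-fusion alternatives instead run separate networks per modality and merge features or decisions, trading robustness to registration error against the loss of fine-grained cross-modal cues.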
We present a method for monitoring rapidly urbanizing areas with deep learning techniques. This method was developed during participation in the SpaceNet7 deep learning challenge and uses a U-Net architecture to semantically label each frame in a time series of monthly images spanning roughly two years. The image sequences were collected over one hundred rapidly urbanizing regions. We discuss our network architecture and post-processing algorithms for combining multiple semantically labeled frames to provide object-level change detection.
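One common post-processing step for turning per-frame semantic masks into object-level change is temporal differencing with a persistence filter: a building counts as new only if it is absent early in the series and then detected consistently. The sketch below is a hypothetical illustration of that idea, not the authors' algorithm; `min_persist` is an assumed parameter.

```python
import numpy as np

def new_building_mask(masks, min_persist=2):
    """Flag newly built-up pixels from a time series of building masks.

    masks: binary array (T, H, W) of per-frame U-Net building
           predictions, one frame per month.
    A pixel is *new* if it is absent in the first frame but present
    in at least `min_persist` consecutive later frames (persistence
    suppresses single-frame false detections).
    Returns a binary (H, W) mask of newly built-up pixels.
    """
    first = masks[0].astype(bool)
    run = np.zeros(first.shape, dtype=int)    # current streak length
    best = np.zeros(first.shape, dtype=int)   # longest streak seen
    for frame in masks[1:]:
        run = np.where(frame.astype(bool), run + 1, 0)
        best = np.maximum(best, run)
    return (~first) & (best >= min_persist)
```

In practice the resulting pixel mask would then be grouped into connected components so each new building is reported as a single object.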