We present a method for monitoring rapidly urbanizing areas with deep learning techniques. The method was developed during our participation in the SpaceNet7 deep learning challenge and uses a U-Net architecture to semantically label each frame in a time series of monthly images spanning roughly two years. The image sequences were collected over one hundred rapidly urbanizing regions. We discuss our network architecture and the post-processing algorithms that combine multiple semantically labeled frames to provide object-level change detection.
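As a rough illustration of the object-level post-processing idea (not the authors' exact pipeline), the sketch below thresholds per-frame U-Net probability maps and flags building objects that appear in the final frame but not the first. The threshold, minimum-size filter, overlap rule, and function name are assumptions made for the example.

```python
import numpy as np
from scipy import ndimage

def object_level_change(prob_maps, threshold=0.5, min_pixels=20):
    """Toy post-processing: threshold per-frame probability maps (T, H, W),
    then flag building objects present in the last frame but largely absent
    in the first as candidate new construction. Illustrative only."""
    masks = prob_maps > threshold
    first, last = masks[0], masks[-1]
    labels, n = ndimage.label(last)            # connected components in final frame
    new_objects = []
    for obj_id in range(1, n + 1):
        obj = labels == obj_id
        if obj.sum() < min_pixels:
            continue                           # ignore tiny detections
        # object counts as "new" if it barely overlaps the first frame's footprint
        if (obj & first).sum() / obj.sum() < 0.1:
            new_objects.append(obj_id)
    return labels, new_objects

# Random maps stand in for U-Net outputs over a 24-month sequence
prob_maps = np.random.rand(24, 128, 128)
labels, new_ids = object_level_change(prob_maps)
print(f"{len(new_ids)} candidate new buildings")
```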
Many problems in defense and automatic target recognition (ATR) require concurrent detection and classification of objects of interest in wide field-of-view overhead imagery. Traditional machine learning approaches are optimized to perform either detection or classification individually; only recently have algorithms expanded to tackle both problems simultaneously. Even high-performing parallel approaches struggle to disambiguate tightly clustered objects, often relying on external techniques such as non-maximum suppression. We have developed a hybrid detection-classification approach that optimizes the segmentation of closely spaced objects regardless of object size, shape, and diversity. This improves overall performance on both the detection and classification problems.
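For reference, the following is a minimal sketch of the greedy non-maximum suppression step that conventional parallel detectors typically rely on to separate clustered detections; the IoU threshold and box format are illustrative assumptions, and this is not the hybrid approach described in the abstract.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS over axis-aligned boxes given as (x1, y1, x2, y2) rows:
    keep the highest-scoring box, drop boxes that overlap it strongly, repeat."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        # intersection of the top box with each remaining box
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou < iou_thresh]         # suppress overlapping boxes
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # the two overlapping boxes collapse to one detection
```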
Estimating building height from satellite imagery is important for digital surface modeling while also providing rich information for change detection and building footprint detection. Acquiring building height usually requires a LiDAR system, which many satellite platforms lack. In this paper, we describe a building height estimation method that does not require building height annotation. Our method estimates building height from building shadows and satellite image metadata given a single RGB satellite image. To reduce the annotation required, we design a multi-stage instance detection method for building and shadow detection with both supervised and semi-supervised training. Given the detected building and shadow instances, we then estimate the building height using the satellite image metadata. Building height is estimated by maximizing the overlap between the shadow region projected for a query height and the detected shadow region. We evaluate our method on the xView2 and Urban Semantic 3D datasets and show that the proposed method achieves accurate building detection, shadow detection, and height estimation.
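The overlap-maximization step can be sketched as a simple search over query heights: project the detected footprint along the shadow direction by the shadow length h / tan(sun elevation) taken from image metadata, and keep the height whose projected shadow best overlaps (IoU) the detected shadow mask. The flat-terrain raster shift, north-up sign conventions, height grid, and function names below are assumptions, not the authors' implementation.

```python
import numpy as np

def project_shadow(footprint, height_m, sun_elev_deg, sun_az_deg, gsd_m=0.5):
    """Shift a rasterized footprint by the shadow length h / tan(elevation),
    opposite the sun azimuth. Assumes flat terrain and north-up imagery
    (rows increase southward); sign conventions vary by product."""
    length_px = height_m / np.tan(np.radians(sun_elev_deg)) / gsd_m
    dx = -length_px * np.sin(np.radians(sun_az_deg))   # east offset of shadow
    dy = length_px * np.cos(np.radians(sun_az_deg))    # row offset of shadow
    shifted = np.zeros_like(footprint)
    ys, xs = np.nonzero(footprint)
    ys2 = np.clip((ys + dy).round().astype(int), 0, footprint.shape[0] - 1)
    xs2 = np.clip((xs + dx).round().astype(int), 0, footprint.shape[1] - 1)
    shifted[ys2, xs2] = 1
    return shifted

def estimate_height(footprint, shadow_mask, sun_elev_deg, sun_az_deg,
                    heights=np.arange(3.0, 100.0, 1.0)):
    """Return the query height whose projected shadow best overlaps (IoU)
    the detected shadow mask, mirroring the overlap-maximization idea."""
    def iou(a, b):
        union = np.logical_or(a, b).sum()
        return np.logical_and(a, b).sum() / union if union else 0.0
    scores = [iou(project_shadow(footprint, h, sun_elev_deg, sun_az_deg),
                  shadow_mask) for h in heights]
    return heights[int(np.argmax(scores))]
```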
Automatic Target Recognition (ATR) in Synthetic Aperture Radar (SAR) for wide-area search is a difficult problem for both classic techniques and state-of-the-art approaches. Deep Learning (DL) techniques have been shown to be effective at detection and classification; however, they require significant amounts of training data. Sliding-window detectors with Convolutional Neural Network (CNN) backbones for classification typically suffer from localization error and poor compute efficiency, and need to be tuned to the size of the target. Our approach to the wide-area search problem is an architecture that combines classic ATR techniques with a ResNet-18 backbone. The detector is dual-stage, consisting of an optimized Constant False Alarm Rate (CFAR) screener and a Bayesian Neural Network (BNN) detector, which provides a significant speed advantage over standard sliding-window approaches. It also reduces false alarms while maintaining a high detection rate, allowing the classifier to run on fewer detections and improving processing speed. This paper tests the BNN and CNN components of HySARNet through experiments that determine their robustness to variations in graze angle, resolution, and additive noise. We also experiment with synthetic targets for training the CNN. Synthetic data has the potential to enable training on hard-to-find targets for which little or no measured data exists. SAR simulation software and 3D CAD models are used to generate the synthetic targets. Experiments utilize the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset, the widely used standard dataset for SAR ATR publications.
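As background for the screener stage, here is a minimal cell-averaging CFAR sketch over a square-law (power) SAR image: local clutter is estimated from a training annulus around each pixel, and an adaptive threshold is set from the desired false-alarm rate. The window sizes, exponential clutter model, and helper name are assumptions; this does not reproduce the paper's optimized CFAR or BNN stages.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar(power_image, guard=2, train=8, pfa=1e-4):
    """Toy 2-D cell-averaging CFAR: for each pixel, average the training
    cells in a square annulus (excluding a guard window around the cell
    under test) and flag the pixel if it exceeds alpha * local mean."""
    img = np.asarray(power_image, dtype=float)
    outer = 2 * (guard + train) + 1            # full window edge length
    inner = 2 * guard + 1                      # guard window edge length
    sum_outer = uniform_filter(img, outer) * outer**2
    sum_inner = uniform_filter(img, inner) * inner**2
    n_train = outer**2 - inner**2
    clutter_mean = (sum_outer - sum_inner) / n_train
    # threshold multiplier for exponential (square-law) clutter at the given Pfa
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
    return img > alpha * clutter_mean

# Example: noise plus one bright point scatterer
scene = np.random.exponential(1.0, (256, 256))
scene[128, 128] += 50.0
print(ca_cfar(scene).sum(), "detections")
```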