To extend cutting-edge deep learning techniques to more relevant defense applications, we adapt our well-established port-monitoring ATR techniques from generic ship classes to a pair of newly curated datasets: aircraft carriers and other military ships. We explore several techniques for data augmentation and dataset splits that represent different deployment regimes, such as revisiting known military ports and encountering never-before-seen ports and ships. We see reliable results (F1 > 0.9) detecting and classifying aircraft carriers by type (and, by proxy, nationality), as well as encouraging preliminary results (mAP > 0.7) detecting and differentiating military ships by sub-class.
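A minimal sketch of how such deployment-regime splits might be constructed, assuming chips are tagged with the port they were collected at; the `port_id` column, port names, and file names below are hypothetical illustrations, not from the paper.

```python
# Sketch: build a "never-before-seen port" evaluation split by holding out
# entire ports, so no test-time port appears in training. All column names
# and values here are hypothetical.
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

df = pd.DataFrame({
    "chip": [f"chip_{i}.png" for i in range(8)],
    "label": ["carrier", "destroyer"] * 4,
    "port_id": ["norfolk", "norfolk", "yokosuka", "yokosuka",
                "toulon", "toulon", "qingdao", "qingdao"],
})

splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(df, groups=df["port_id"]))
print("train ports:", sorted(df.iloc[train_idx]["port_id"].unique()))
print("test ports: ", sorted(df.iloc[test_idx]["port_id"].unique()))
```

A "revisited known ports" regime would instead split within each port group, so the same locations appear on both sides of the split.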
We present a method for monitoring rapidly urbanizing areas with deep learning techniques. This method was generated during participation in the SpaceNet 7 deep learning challenge and utilizes a U-Net architecture for semantically labeling each frame in a time series of monthly images that span roughly two years. The image sequences were collected over one hundred rapidly urbanizing regions. We discuss our network architecture and post-processing algorithms for combining multiple semantically labeled frames to provide object-level change detection.
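A minimal sketch of one way per-frame masks can be combined into object-level change, assuming the U-Net outputs have already been binarized; the minimum-size filter and all parameters are illustrative, not the paper's post-processing.

```python
# Sketch: object-level change from a pair of per-month binary building masks.
# Assumes segmentation outputs already thresholded to {0, 1}; the min_pixels
# speckle filter is an illustrative parameter.
import numpy as np
from scipy import ndimage

def new_objects(mask_early: np.ndarray, mask_late: np.ndarray, min_pixels=20):
    """Label connected components present in the late frame but absent early."""
    appeared = (mask_late == 1) & (mask_early == 0)
    labels, n = ndimage.label(appeared)
    objects = []
    for i in range(1, n + 1):
        component = labels == i
        if component.sum() >= min_pixels:          # suppress speckle noise
            ys, xs = np.nonzero(component)
            objects.append((ys.mean(), xs.mean(), int(component.sum())))
    return objects  # (row, col, area) per newly built structure

early = np.zeros((64, 64), np.uint8)
late = early.copy(); late[10:20, 10:25] = 1        # a "new building"
print(new_objects(early, late))                    # [(14.5, 17.0, 150)]
```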
For many intelligence sources, reliable independent algorithms exist for interpreting the data and reporting relevant information to analysts. However, fusing data and algorithmic outputs across these sources to achieve true sensemaking can be challenging. This is especially true at the individual object level, given the sources' highly variable spatiotemporal resolutions and uncertainties. We have developed a framework for merging automatic target recognition (ATR) algorithms and their outputs to produce a sensor-agnostic means of object-level change detection, establishing the patterns-of-life necessary for big-picture sensemaking, activity-based intelligence, and autonomous decision making.
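A minimal sketch of the kind of spatiotemporal gating such object-level fusion requires, assuming each source reports a position, time, and uncertainty; the field names, gate sizes, and distance approximation are all hypothetical, not the framework's actual association logic.

```python
# Sketch: sensor-agnostic association of ATR detections by spatiotemporal
# gating, scaled by each sensor's reported uncertainty. Field names and
# gate parameters are hypothetical.
import math
from dataclasses import dataclass

@dataclass
class Detection:
    lat: float        # degrees
    lon: float        # degrees
    t: float          # seconds since epoch
    sigma_m: float    # sensor-reported position uncertainty, meters
    source: str

def same_object(a: Detection, b: Detection, dt_max=3600.0) -> bool:
    """Gate on time first, then on distance scaled by the larger uncertainty."""
    if abs(a.t - b.t) > dt_max:
        return False
    # crude equirectangular distance, adequate for small separations
    dx = (a.lon - b.lon) * 111_320 * math.cos(math.radians(a.lat))
    dy = (a.lat - b.lat) * 110_540
    gate = 3.0 * max(a.sigma_m, b.sigma_m)   # 3-sigma spatial gate
    return math.hypot(dx, dy) <= gate

eo  = Detection(36.95, -76.33, 1000.0, 15.0, "EO")
sar = Detection(36.9502, -76.3301, 1600.0, 40.0, "SAR")
print(same_object(eo, sar))   # True: within gate despite differing sensors
```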
Change detection between two temporal scenes of overhead imagery is a common problem in many applications of computer vision and image processing. Traditional change detection techniques provide only pixel-level detail of change and are sensitive to noise and variations in images such as lighting, season, and perspective. We propose a deep learning approach that exploits a segmentation detector and classifier to perform object-level change detection. This allows us to create class-level segmentation masks of a pair of images collected from the same location at different times. This pair of segmentation masks can be compared to detect altered objects, providing a detailed report to a user on which objects in a scene have changed.
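A minimal sketch of the mask-comparison step, assuming two co-registered class-labeled masks of the same scene; the class codes and majority-vote rule are illustrative stand-ins.

```python
# Sketch: compare two class-level segmentation masks of the same scene and
# report per-object class changes. Class codes are illustrative.
import numpy as np
from scipy import ndimage

CLASSES = {0: "background", 1: "building", 2: "vehicle"}

def changed_objects(mask_t0: np.ndarray, mask_t1: np.ndarray):
    """Report objects whose majority class differs between the two times."""
    reports = []
    labels, n = ndimage.label((mask_t0 > 0) | (mask_t1 > 0))
    for i in range(1, n + 1):
        region = labels == i
        c0 = np.bincount(mask_t0[region]).argmax()   # majority class at t0
        c1 = np.bincount(mask_t1[region]).argmax()   # majority class at t1
        if c0 != c1:
            reports.append((i, CLASSES[int(c0)], CLASSES[int(c1)]))
    return reports

t0 = np.zeros((32, 32), np.uint8); t0[5:10, 5:10] = 2   # vehicle present
t1 = np.zeros((32, 32), np.uint8)                       # vehicle departed
print(changed_objects(t0, t1))   # [(1, 'vehicle', 'background')]
```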
Deep learning-based classification of objects in overhead imagery is a difficult problem to solve due to low to moderate available resolution as well as wide ranges of scale between objects. Traditional machine learning object classification techniques yield sub-optimal results in this scenario, with new techniques developed to optimize performance. Our Lockheed Martin team has developed data pre-processing techniques such as context masking and uniform rotation which improve classifier performance in this application. Additionally, we have demonstrated that shallow classifier models perform at least as well as deeper models in this paradigm, allowing for fast training and inference times.
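A minimal sketch of the two pre-processing ideas named above, assuming the object footprint and heading for each chip are known from upstream detection; the exact masking and rotation conventions in the paper may differ.

```python
# Sketch of the two pre-processing ideas: context masking (zero out pixels
# outside the object footprint) and uniform rotation (rotate each chip so the
# object presents a fixed orientation). Footprint and heading assumed known.
import numpy as np
from scipy import ndimage

def context_mask(chip: np.ndarray, footprint: np.ndarray) -> np.ndarray:
    """Suppress background clutter by keeping only in-footprint pixels."""
    return np.where(footprint, chip, 0)

def uniform_rotation(chip: np.ndarray, heading_deg: float) -> np.ndarray:
    """Rotate so every object presents the same orientation to the classifier."""
    return ndimage.rotate(chip, angle=-heading_deg, reshape=False, order=1)

chip = np.random.rand(64, 64).astype(np.float32)
footprint = np.zeros((64, 64), bool); footprint[20:44, 28:36] = True
normalized = uniform_rotation(context_mask(chip, footprint), heading_deg=37.0)
print(normalized.shape)
```

Both steps shrink the variability the classifier must absorb, which is consistent with the observation that shallow models then suffice.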
Many problems in defense and automatic target recognition (ATR) require concurrent detection and classification of objects of interest in wide field-of-view overhead imagery. Traditional machine learning approaches are optimized to perform either detection or classification individually; only recently have algorithms expanded to tackle both problems simultaneously. Even highly performing parallel approaches struggle to disambiguate tightly clustered objects, often relying on external techniques such as non-maximum suppression. We have developed a hybrid detection-classification approach that optimizes the segmentation of closely spaced objects, regardless of size, shape, or object diversity. This improves overall performance for both the detection and classification problems.
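To illustrate why the external step mentioned above is a liability, here is a minimal sketch of standard non-maximum suppression (not the hybrid approach itself): when two genuinely distinct objects are close enough to overlap, NMS keeps only one. Thresholds and boxes are illustrative.

```python
# Sketch of standard non-maximum suppression, the external technique the
# abstract says parallel detectors typically rely on. Demonstrates the
# failure mode on tightly clustered objects.
import numpy as np

def iou(a, b):
    """Intersection-over-union of [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, thresh=0.5):
    order = np.argsort(scores)[::-1]          # highest score first
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep

# Two genuinely distinct but closely spaced ships: NMS keeps only one.
boxes = [[0, 0, 10, 4], [1, 1, 11, 5]]
print(nms(boxes, scores=[0.9, 0.85], thresh=0.5))   # [0]: second ship lost
```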
Automatic Target Recognition (ATR) in Synthetic Aperture Radar (SAR) for wide-area search is a difficult problem for both classic techniques and state-of-the-art approaches. Deep Learning (DL) techniques have been shown to be effective at detection and classification; however, they require significant amounts of training data. Sliding window detectors with Convolutional Neural Network (CNN) backbones for classification typically suffer from localization error and poor compute efficiency, and need to be tuned to the size of the target. Our approach to the wide-area search problem, HySARNet, is an architecture that combines classic ATR techniques with a ResNet-18 backbone. The detector is dual-stage and consists of an optimized Constant False Alarm Rate (CFAR) screener and a Bayesian Neural Network (BNN) detector, which provides a significant speed advantage over standard sliding window approaches. It also reduces false alarms while maintaining a high detection rate, allowing the classifier to run on fewer detections and improving processing speed. This paper tests the BNN and CNN components of HySARNet through experiments that determine their robustness to variations in graze angle, resolution, and additive noise. We also experiment with synthetic targets for training the CNN; synthetic data has the potential to enable training on hard-to-find targets for which little or no real data exists. SAR simulation software and 3D CAD models are used to generate the synthetic targets. Experiments utilize the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset, the widely used standard dataset for SAR ATR publications.
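A minimal sketch of a cell-averaging CFAR screener, the classic first-stage technique the abstract names; the window sizes and false-alarm-rate setting below are illustrative defaults, not the paper's optimized configuration.

```python
# Sketch: 1-D cell-averaging CFAR. Flags cells whose power exceeds a
# threshold scaled from surrounding clutter; guard/train/pfa are illustrative.
import numpy as np

def ca_cfar(power: np.ndarray, guard=2, train=8, pfa=1e-3) -> np.ndarray:
    n = 2 * train                                 # training cells per test cell
    alpha = n * (pfa ** (-1.0 / n) - 1.0)         # standard CA-CFAR scale factor
    hits = np.zeros_like(power, dtype=bool)
    half = guard + train
    for i in range(half, len(power) - half):
        # training cells on both sides, skipping the guard band around cell i
        window = np.r_[power[i - half:i - guard],
                       power[i + guard + 1:i + half + 1]]
        hits[i] = power[i] > alpha * window.mean()
    return hits

rng = np.random.default_rng(0)
clutter = rng.exponential(1.0, 512)               # speckle-like background
clutter[200] += 40.0                              # bright point target
print(np.nonzero(ca_cfar(clutter))[0])            # expect a hit at index 200
```

In a two-stage pipeline like the one described, only the cells this screener flags would be passed to the downstream detector and classifier.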
For functional neuroimaging, existing small-animal diffuse optical tomography (DOT) systems either do not provide adequate temporal sampling rates, have sparse spatial sampling, or have limited three-dimensional fields of view. To achieve adequate frame rates (1-10 Hz), we have constructed a system using sCMOS detection-based DOT with asymmetric measurements: many (>10,000) detectors and fewer (<100) structured illumination patterns (generated by digital micromirror devices, DMDs). The system employs multiple views, involving multiple cameras and illuminators, to provide a three-dimensional field of view. To coregister the measurements with the mouse head anatomy, we developed a surface profiling method in which point illumination patterns are scanned over the mouse head and combined with calibration data to create three-dimensional point clouds and meshes representing the head. We applied this method to a 3D-printed figurine, and the resulting mesh had surface vertices whose positions deviated 0.4 ± 0.2 mm (mean ± SD) from the original "ground truth" mesh that had been employed to 3D-print the figurine. To evaluate the imaging system's resolution, field of view, and sensitivity versus depth, we placed simulated activations at different depths within a tissue model of a real mouse head imaged with our surface profiling method. Results indicate that this imaging system is sensitive to absorption changes at depths of >3 mm. In addition, a partial (one-camera, one-illuminator) version of the system successfully imaged neural activations evoked by forepaw stimulation of a live mouse.
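A minimal sketch of the surface-accuracy metric reported above (mean ± SD vertex deviation from a ground-truth mesh), computed here as nearest-neighbor distances between vertex clouds; the point arrays are synthetic stand-ins, not real profiling data.

```python
# Sketch: mean +/- SD deviation of reconstructed mesh vertices from a
# ground-truth mesh, via nearest-neighbor distances. Synthetic stand-in data.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
ground_truth = rng.uniform(0, 20, size=(5000, 3))              # mm, stand-in
reconstructed = ground_truth + rng.normal(0, 0.3, (5000, 3))   # simulated error

tree = cKDTree(ground_truth)
dist, _ = tree.query(reconstructed)        # nearest-neighbor distance, mm
print(f"deviation: {dist.mean():.2f} +/- {dist.std():.2f} mm")
```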
Conventional two-photon microscopy (TPM) is capable of imaging neural dynamics with subcellular resolution, but it is limited to a field-of-view (FOV) diameter <1 mm. Although there has been recent progress in extending the FOV in TPM, a principled design approach for developing large-FOV TPM (LF-TPM) with off-the-shelf components has yet to be established. Therefore, we present a design strategy that depends on analyzing the optical invariant of commercially available objectives, relay lenses, mirror scanners, and emission collection systems in isolation. Components are then selected to maximize the space-bandwidth product of the integrated microscope. In comparison with other LF-TPM systems, our strategy simplifies the sequence of design decisions and is applicable to extending the FOV in any microscope with an optical relay. The microscope we constructed with this design approach achieves <1.7-μm lateral and <28-μm axial resolution over a 7-mm diameter FOV, which is a 100-fold increase in FOV compared with conventional TPM. As a demonstration of the potential that LF-TPM has on understanding the microarchitecture of the mouse brain across interhemispheric regions, we performed in vivo imaging of both the cerebral vasculature and microglia cell bodies over the mouse cortex.
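A back-of-envelope calculation, consistent with the numbers quoted above, of the space-bandwidth product the design strategy maximizes; the simple (FOV/resolution)² form and the assumed 0.7-mm conventional FOV are illustrative, and the paper's exact definition may differ.

```python
# Back-of-envelope space-bandwidth product from the quoted specs:
# roughly (FOV diameter / lateral resolution)^2 resolvable points.
# The 0.7-mm conventional-TPM FOV is an assumption for comparison.
fov_mm = 7.0               # constructed LF-TPM system
res_um = 1.7               # lateral resolution
conventional_fov_mm = 0.7  # assumed conventional TPM FOV (<1 mm)

sbp = (fov_mm * 1e3 / res_um) ** 2
print(f"resolvable points: {sbp:.2e}")                               # ~1.7e7
print(f"FOV area gain: {(fov_mm / conventional_fov_mm) ** 2:.0f}x")  # 100x
```

The 100x area gain matches the 100-fold FOV increase stated in the abstract.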
Optical intrinsic signal (OIS) imaging has been a powerful tool for capturing functional brain hemodynamics in rodents. Recent wide field-of-view implementations of OIS have provided efficient maps of functional connectivity from spontaneous brain activity in mice. However, OIS requires scalp retraction and is limited to superficial cortical tissues. Diffuse optical tomography (DOT) techniques provide noninvasive imaging, but previous DOT systems for rodent neuroimaging have been limited either by sparse spatial sampling or by slow speed. Here, we develop a DOT system with asymmetric source–detector sampling that combines the high-density spatial sampling (0.4 mm) detection of a scientific complementary metal-oxide-semiconductor camera with the rapid (2 Hz) imaging of a few (<50) structured illumination (SI) patterns. Analysis techniques are developed to take advantage of the system’s flexibility and optimize trade-offs among spatial sampling, imaging speed, and signal-to-noise ratio. An effective source–detector separation for the SI patterns was developed and compared with light intensity for a quantitative assessment of data quality. The light fall-off versus effective distance was also used for in situ empirical optimization of our light model. We demonstrated the feasibility of this technique by noninvasively mapping the functional response in the somatosensory cortex of the mouse following electrical stimulation of the forepaw.
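A minimal sketch of the fall-off calibration idea described above: fit log light intensity against effective source-detector separation and compare the fitted decay with the light model. The data and decay constant below are synthetic and illustrative, not the system's measurements.

```python
# Sketch: in situ light-model check by fitting log intensity vs. effective
# source-detector separation. Synthetic data; mu_eff is an illustrative value.
import numpy as np

rng = np.random.default_rng(2)
r_eff = np.linspace(2.0, 12.0, 60)                  # effective separation, mm
mu_eff = 0.8                                        # assumed decay rate, 1/mm
log_I = -mu_eff * r_eff + rng.normal(0, 0.05, r_eff.size)

slope, intercept = np.polyfit(r_eff, log_I, 1)
print(f"fitted decay rate: {-slope:.2f} /mm (model: {mu_eff:.2f} /mm)")
```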