The use of camouflage is widespread in the biological domain, and camouflage has also been used extensively by armed forces around the world to make visual detection and classification of objects of military interest more difficult. The recent advent of ever more autonomous military agents raises the questions of whether camouflage can have a similar effect on autonomous agents as it has on human agents, and if so, what kind of camouflage will be effective against such adversaries. In previous works, we have shown that image classifiers based on deep neural networks can be confused by patterns generated by generative adversarial networks (GANs). Specifically, we trained a classifier to distinguish between two ship types, military and civilian. We then used a GAN to generate patterns that, when overlaid on parts of military vessels (frigates), made the classifier confuse the modified frigates with civilian vessels. We termed such patterns "adversarial camouflage" (AC), since these patterns effectively camouflage the frigates with respect to the classifier. The type of adversarial attack described in our previous work is a so-called white box attack, that is, an attack devised with full knowledge of the classifier under attack. This is in contrast to black box attacks, which target unknown classifiers. In our context, the ultimate goal is to design a GAN that is capable of black box attacks, in other words, a GAN that generates AC that is effective across a wide range of neural network classifiers. In the current work, we study techniques for improving the robustness of our GAN-based approach by investigating whether a GAN can be trained to fool a selection of several neural network-based classifiers, or to reduce the confidence of their classifications to a degree that makes them unreliable. Our results indicate that it is indeed possible to weaken a wider range of neural network classifiers by training the generator against several classifiers.
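As a rough illustration of how a single generator could be trained against several classifiers at once, the sketch below averages a "classified as civilian" loss over an ensemble of frozen networks. It is a minimal PyTorch sketch under assumed shapes and interfaces (a generator taking a 100-dimensional latent vector, a binary mask selecting hull/superstructure pixels), not the implementation used in our experiments.

```python
# Minimal PyTorch sketch (hypothetical interfaces): one generator trained against
# an ensemble of frozen ship classifiers. The generator is assumed to map a
# 100-dimensional latent vector to a pattern with the same spatial size as the image.
import torch
import torch.nn as nn

def ensemble_adversarial_loss(generator, classifiers, frigate_batch, mask, civilian_labels):
    """Average the 'classified as civilian' loss over all frozen target classifiers.

    mask selects the hull/superstructure pixels; civilian_labels is a LongTensor
    holding the index of the civilian class for every image in the batch.
    """
    latent = torch.randn(frigate_batch.size(0), 100, device=frigate_batch.device)
    patch = generator(latent)
    # Overlay the generated pattern only on the masked regions of the vessel.
    patched = frigate_batch * (1 - mask) + patch * mask
    ce = nn.CrossEntropyLoss()
    loss = 0.0
    for clf in classifiers:      # each classifier is frozen; only the generator learns
        loss = loss + ce(clf(patched), civilian_labels)
    return loss / len(classifiers)
```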
Different types of imaging sensors are frequently employed for detection, tracking and classification (DTC) of naval vessels. A number of countermeasure techniques are currently employed against such sensors, and with the advent of ever more sensitive imaging sensors and sophisticated image analysis software, the question becomes what to do in order to render DTC as hard as possible. In recent years, progress in deep learning has resulted in algorithms for image analysis that often rival human beings in performance. One approach to fooling such algorithms is the use of adversarial camouflage (AC). Here, the appearance of the vessel we wish to protect is structured in such a way that it confuses the software analyzing images of the vessel. In our previous work, we added patches of AC to images of frigates. The patches were placed on the hull and/or superstructure of the vessels. The results showed that these patches were highly effective, tricking a previously trained discriminator into classifying the frigates as civilian. In this work, we study the robustness and generality of such patches. The patches have been degraded in various ways, and the resulting images fed to the discriminator. As expected, the more the patches are degraded, the harder it becomes to fool the discriminator. Furthermore, we have trained new patch generators designed to create patches that will withstand such degradations. Our initial results indicate that the robustness of AC patches may be increased by adding degrading filters in the training of the patch generator.
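The following sketch illustrates one way degrading filters could be inserted into the training loop: the generated patch is passed through a randomly chosen degradation (noise, blur or down-scaling) before it is overlaid on the image. The specific filters and parameters are assumptions made for illustration, not those used in the reported experiments.

```python
# Illustrative degradation filters (assumed, not the paper's exact choices) applied to a
# generated patch during training, so the generator learns patterns that survive noise,
# blur and loss of resolution. Patches are assumed to be 4D tensors of shape (N, C, H, W).
import torch
import torch.nn.functional as F

def degrade(patch):
    """Apply one randomly chosen degradation before the patch is overlaid on the image."""
    choice = torch.randint(0, 3, (1,)).item()
    if choice == 0:                                    # additive sensor-like noise
        return patch + 0.05 * torch.randn_like(patch)
    if choice == 1:                                    # mild blur via average pooling
        return F.avg_pool2d(patch, kernel_size=3, stride=1, padding=1)
    # simulate reduced resolution: down-scale, then up-scale back to the original size
    small = F.interpolate(patch, scale_factor=0.5, mode="bilinear", align_corners=False)
    return F.interpolate(small, size=patch.shape[-2:], mode="bilinear", align_corners=False)
```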
The use of different types of camouflage is a longstanding technique employed by armed forces to avoid detection, classification or tracking of objects of military interest. Typically, such camouflage is intended to fool human observers. However, in future battle theaters one must expect to face weapons that are ’artificially intelligent’ in some way, and the question then arises as to whether the same types of camouflage will be effective against such weapons. An equally important question is whether it is possible to design camouflage specifically to confuse ’artificially intelligent’ adversaries, and what such camouflage might look like. It is this latter question that is the object of the study reported here. In particular, we consider whether carefully designed camouflage patterns have a detrimental effect on the performance of neural networks trained to distinguish among different ship classes. We train a neural network to distinguish between different types of military and civilian vessels, specifically requiring the network to determine whether a vessel is military or civilian. We then use this network to train a second network, a generative adversarial network, that generates patterns to overlay on parts of the vessels in such a way as to thwart the performance of the first network. We show that such adversarial camouflage is very effective in confusing the original classification network.
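A minimal sketch of this two-stage setup is given below, assuming a PyTorch implementation in which the trained classifier is frozen while the patch generator is optimized against it; the model interfaces, the data loader and the class index for "civilian" are hypothetical.

```python
# Hypothetical PyTorch sketch of the two-stage setup: a frozen classifier from stage one
# is reused to train a patch generator in stage two. Model interfaces, the data loader
# and the class index for "civilian" are assumptions made for illustration.
import torch
import torch.nn as nn

def train_patch_generator(generator, classifier, frigate_loader, civilian_idx=0, steps=1000):
    for p in classifier.parameters():              # keep the first network fixed
        p.requires_grad = False
    classifier.eval()
    opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
    ce = nn.CrossEntropyLoss()
    for _, (images, masks) in zip(range(steps), frigate_loader):
        patch = generator(torch.randn(images.size(0), 100))
        patched = images * (1 - masks) + patch * masks          # overlay on selected vessel parts
        target = torch.full((images.size(0),), civilian_idx, dtype=torch.long)
        loss = ce(classifier(patched), target)                  # reward fooling the classifier
        opt.zero_grad(); loss.backward(); opt.step()
    return generator
```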
Infrared (IR) imagery is frequently used in security/surveillance and military image processing applications. In this article we will consider the problem of outlining military naval vessels in such images. Obtaining these outlines is important for a number of applications, for instance in vessel classification.
Detecting this outline is essentially a complex image segmentation task, and we use a dedicated neural network for this purpose. Neural networks have recently shown great promise in a wide range of image processing applications, and image segmentation is no exception. The main drawback of using neural networks for this purpose is the need for substantial amounts of data to train the networks. This problem is of particular concern for our application due to the difficulty of obtaining IR images of military vessels.
To alleviate this problem, we have experimented with using alternatives to true IR images for training the neural networks. Although such data cannot capture the exact nature of real IR images, they resemble IR images closely enough to contribute substantially to the training and final performance of the neural network.
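One simple way to exploit such surrogate data is to mix it with the few available real IR images when training the segmentation network. The sketch below illustrates this idea in PyTorch; the dataset objects, the model and the binary-mask loss are assumptions rather than the exact setup used in this work.

```python
# Illustrative PyTorch sketch: mixing surrogate (non-IR) training images with the few
# available real IR images when training an outline-segmentation network. The dataset
# objects, the model and the loss are assumptions, not the exact setup used here.
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader

def train_outline_segmenter(model, surrogate_ds, real_ir_ds, epochs=10):
    loader = DataLoader(ConcatDataset([surrogate_ds, real_ir_ds]), batch_size=8, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()                   # per-pixel loss: vessel outline vs. background
    for _ in range(epochs):
        for images, masks in loader:               # masks are the ground-truth vessel outlines
            loss = bce(model(images), masks)
            opt.zero_grad(); loss.backward(); opt.step()
    return model
```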
We report on the development and application of a random forest regressor that not only identifies substances (one explosive and two simulants) but also estimates their relative concentrations, in both one-substance and two-substance samples. The performance of the regressor is quantified using Receiver Operating Characteristics and contrasted with that of a simple Spectral Angle Mapping technique that worked well on single-substance samples [1-3].
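For illustration, a multi-output random forest regressor for this kind of concentration estimation can be set up with scikit-learn as sketched below; the spectra and concentration targets are synthetic placeholders, not the measured data used in the paper.

```python
# Minimal scikit-learn sketch of a multi-output random forest regressor for relative
# concentrations. The spectra and concentration targets below are synthetic placeholders;
# hyperparameters are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 256))               # placeholder spectra: 300 samples, 256 channels
y = rng.dirichlet([1.0, 1.0, 1.0], size=300)  # placeholder relative concentrations of 3 substances

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
reg = RandomForestRegressor(n_estimators=200, random_state=0)
reg.fit(X_train, y_train)                  # one output per substance (multi-output regression)
y_pred = reg.predict(X_test)               # predicted relative concentrations, shape (n_test, 3)
```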
Many documents contain (free-hand) underlining, "COPY" stamps, crossed-out text, doodling and other "clutter" that occludes the text. In many cases, it is not possible to separate the text from the clutter, and commercial OCR solutions typically fail for cluttered text. We present a new method for finding the clutter using path analysis of points on the skeleton of the clutter/text connected component. This method can separate the clutter from the text even for fairly complex clutter shapes.
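A rough sketch of this kind of skeleton-based analysis is shown below using scikit-image; the selection criterion used (long, thin skeleton segments) is a simplified stand-in for the actual path analysis, included only to make the idea concrete.

```python
# Rough scikit-image sketch of skeleton-based clutter detection. The criterion used here
# (long, thin skeleton segments) is a simplified stand-in for the paper's path analysis.
import numpy as np
from skimage.morphology import skeletonize
from skimage.measure import label, regionprops

def clutter_like_segments(binary_component, min_extent=50, max_aspect=0.2):
    """Return labels of skeleton segments that look like underlines or cross-out strokes."""
    skel = skeletonize(binary_component > 0)       # one-pixel-wide skeleton of the component
    labelled = label(skel)
    candidates = []
    for region in regionprops(labelled):
        minr, minc, maxr, maxc = region.bbox
        height, width = maxr - minr, maxc - minc
        # long, thin paths are more likely clutter than character strokes
        if max(height, width) > min_extent and min(height, width) < max_aspect * max(height, width):
            candidates.append(region.label)
    return candidates
```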
Even with good localization of occluding clutter, it is difficult to use feature-based recognition for occluded characters, simply because the clutter affects the features in various ways. We propose a new algorithm that uses adapted templates of the font in the document and that can handle all forms of occlusion of the character. The method simulates the localization of the corresponding clutter in the templates and compares the unaffected parts of the templates with the character. The method has proved highly successful even when much of the character is occluded. We present examples of clutter localization and character recognition with occluded characters.
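The core idea of comparing only the unoccluded parts of a character against the adapted templates can be sketched as follows; the matching score used here (normalized pixel agreement outside the clutter mask) is an illustrative assumption, not the paper's exact similarity measure.

```python
# Illustrative sketch of occlusion-aware template matching: only pixels outside the
# localised clutter mask are compared. The normalised pixel-agreement score is an
# assumption for illustration, not the paper's exact measure.
import numpy as np

def occlusion_aware_match(character_img, templates, clutter_mask):
    """Score each font template using only the pixels unaffected by clutter.

    character_img and each template are binary arrays of the same shape;
    clutter_mask is a boolean array marking the occluded pixels.
    """
    valid = ~clutter_mask                            # pixels not covered by clutter
    scores = {}
    for name, tmpl in templates.items():
        agree = (character_img == tmpl) & valid      # agreement on unoccluded pixels only
        scores[name] = agree.sum() / max(valid.sum(), 1)
    return max(scores, key=scores.get)               # best-matching character label
```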