The generation of synthetic multispectral satellite images has not yet reached the quality level achievable in other domains, such as the generation and manipulation of face images. Part of the difficulty stems from the need to generate consistent data across the entire electromagnetic spectrum covered by such images, at radiometric resolutions higher than those typically used in multimedia applications. The different spatial resolutions of the image bands corresponding to different wavelengths pose additional problems, whose main effect is a lack of spatial detail in the synthetic images with respect to the original ones. We propose two generative adversarial network (GAN)-based architectures explicitly designed to generate synthetic satellite imagery by applying style transfer to 13-band Sentinel-2 Level-1C images. To avoid losing the finer spatial details and to improve the sharpness of the generated images, we introduce a pansharpening-like approach, whereby the spatial structures of the input image are transferred to the style-transferred images without introducing visible artifacts. The results obtained by applying the proposed architectures to transform barren images into vegetation images and vice versa, and to transform summer (resp. winter) images into winter (resp. summer) images, confirm the validity of the proposed solution.
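The pansharpening-like detail transfer described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it uses classic high-pass-filter detail injection, where the fine spatial structures of the original band (the residual after a box blur) are added back to the style-transferred band. The function name, window size, and gain parameter are assumptions chosen for the example.

```python
import numpy as np

def box_blur(band, k=5):
    """Simple k-by-k mean filter (low-pass) with edge padding."""
    pad = k // 2
    padded = np.pad(band.astype(np.float64), pad, mode="edge")
    out = np.zeros(band.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + band.shape[0], dx:dx + band.shape[1]]
    return out / (k * k)

def inject_spatial_details(original, styled, k=5, gain=1.0):
    """HPF-style detail injection: add the high-frequency residual of the
    original band to the style-transferred band, then clip to 16-bit range."""
    details = original.astype(np.float64) - box_blur(original, k)
    sharpened = styled.astype(np.float64) + gain * details
    return np.clip(sharpened, 0, 65535).astype(np.uint16)
```

Applied band by band, this keeps the colorimetry of the style-transferred output while restoring edges and textures from the input image.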
Generation and manipulation of digital images based on deep learning (DL) are receiving increasing attention for both benign and malevolent uses. As the importance of satellite imagery grows, DL has also started being used for the generation of synthetic satellite images. However, the direct use of techniques developed for computer vision applications is not possible, due to the different nature of satellite images. The goal of our work is to describe a number of methods to generate manipulated and synthetic satellite images. To be specific, we focus on two different types of manipulation: full image modification and local splicing. In the former case, we rely on generative adversarial networks commonly used for style transfer applications, adapting them to implement two different kinds of transfer: (i) land-cover transfer, aiming at modifying the image content from vegetation to barren and vice versa, and (ii) season transfer, aiming at modifying the image content from winter to summer and vice versa. With regard to local splicing, we present two different architectures. The first one uses an image generative pretrained transformer and is trained on pixel sequences in order to predict pixels in semantically consistent regions identified using watershed segmentation. The second technique uses a vision transformer operating on image patches rather than on a pixel-by-pixel basis. We use the trained vision transformer to generate synthetic image segments and splice them into a selected region of the to-be-manipulated image. All the proposed methods generate highly realistic synthetic satellite images. Among the possible applications of the proposed techniques, we mention the generation of suitable datasets for the evaluation and training of tools for the analysis of satellite images.
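The final splicing step, where a generated segment replaces a selected region of the target image, can be illustrated with a minimal mask-based sketch. This is an assumption about the mechanics of the step, not the authors' code; the function name and shapes are hypothetical.

```python
import numpy as np

def splice_region(image, synthetic, mask):
    """Splice synthetic content into a multispectral image.

    image, synthetic: (H, W, B) arrays with the same shape.
    mask: (H, W) boolean array selecting the region to replace.
    Returns a new array; the input image is left untouched.
    """
    assert image.shape == synthetic.shape
    assert mask.shape == image.shape[:2]
    out = image.copy()
    out[mask] = synthetic[mask]  # per-pixel replacement across all bands
    return out
```

In practice the mask would come from the watershed segmentation (first architecture) or from the patch grid of the vision transformer (second architecture), and the boundary would typically be feathered to hide the seam.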
Generative Adversarial Networks (GANs) have been used for both image generation and image style translation. In this paper, we aim to apply GANs to multispectral satellite images. For image generation, we take advantage of the progressive GAN training methodology, which we purposely modify to generate multi-band, 16-bit satellite images similar to a Sentinel-2 Level-1C product. The generated images closely imitate the spectral signatures of the depicted terrain types, as can be seen by comparing typical spectral signatures of synthetic and natural images. Furthermore, we consider the recent use of GAN architectures for transferring the style of images and apply them to perform land-cover transfer of satellite images. Specifically, we use an unpaired style-transfer method to modify images dominated by vegetation land cover into images dominated by bare-land cover and vice versa. The land-cover transfer via GANs gives very promising results, and the visual quality of the transferred images is satisfactory, showing that land-cover transfer is an easier task than GAN generation from scratch. Results are particularly good when the target domain is bare land, for which the visual quality of the transferred images is very good.
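The spectral-signature comparison mentioned above can be made concrete with a small sketch: the signature of an image (or of a masked land-cover region) is the mean value per band, and synthetic and natural images can be compared by the distance between their signatures. Function names and the use of Euclidean distance are assumptions for this example, not the paper's evaluation protocol.

```python
import numpy as np

def spectral_signature(image, mask=None):
    """Mean per-band value of an (H, W, B) multispectral image,
    optionally restricted to a boolean (H, W) land-cover mask."""
    pixels = image.reshape(-1, image.shape[-1]) if mask is None else image[mask]
    return pixels.mean(axis=0)

def signature_distance(real, synthetic, mask=None):
    """Euclidean distance between the mean spectral signatures."""
    diff = spectral_signature(real, mask) - spectral_signature(synthetic, mask)
    return float(np.linalg.norm(diff))
```

A small distance indicates that the generator reproduces the per-band radiometry of the target terrain, which is the property the abstract highlights for the 13-band Sentinel-2 case.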