Image inpainting attempts to fill the missing areas of an image with plausible content that is visually coherent with the image context. Semantic image inpainting has remained a challenging task even with the emergence of deep learning-based approaches. We propose a deep semantic inpainting model built upon a generative adversarial network and a dense U-Net. This design achieves feature reuse while avoiding feature explosion along the upsampling path of the U-Net. The model also uses a composite loss function for the generator network to enforce a joint global and local content consistency constraint. More specifically, our loss function combines a global reconstruction loss, characterizing the semantic similarity between the missing and known image regions, with a local total variation loss, characterizing the natural transitions among adjacent regions. Experimental results on the CelebA-HQ and Paris StreetView datasets demonstrate encouraging performance compared with other state-of-the-art methods in terms of both quantitative and qualitative metrics. For the CelebA-HQ dataset, the proposed method more faithfully infers the semantics of human faces; for the Paris StreetView dataset, our method achieves improved inpainting results with more natural texture transitions, better structural consistency, and enriched textural details.
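Since the abstract only names the loss terms, the following is a minimal PyTorch-style sketch (not the authors' code) of how a composite generator loss combining a global reconstruction term with a local total variation term might be assembled. The helper names, the mask handling, and the weighting factors `lambda_rec`, `lambda_tv`, and `lambda_adv` are assumptions for illustration; the paper's exact definitions and weights may differ.

```python
import torch
import torch.nn.functional as F

def total_variation_loss(x):
    """Local TV term: penalizes abrupt intensity changes between adjacent pixels (N, C, H, W)."""
    tv_h = torch.abs(x[:, :, 1:, :] - x[:, :, :-1, :]).mean()
    tv_w = torch.abs(x[:, :, :, 1:] - x[:, :, :, :-1]).mean()
    return tv_h + tv_w

def composite_generator_loss(output, target, hole_mask, adv_loss,
                             lambda_rec=1.0, lambda_tv=0.1, lambda_adv=0.001):
    """
    Hypothetical composite loss for the generator:
      - global reconstruction term over the whole image (known + inferred regions)
      - local TV term restricted to the filled (hole) region to encourage smooth transitions
      - adversarial term supplied by the discriminator
    All weights are placeholder values, not the paper's settings.
    """
    rec = F.l1_loss(output, target)                       # global content consistency
    tv = total_variation_loss(output * hole_mask)         # natural transitions around the hole
    return lambda_rec * rec + lambda_tv * tv + lambda_adv * adv_loss
```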
The goal of this paper is to numerically simulate and analyze the aero-optic effects caused by the hyper-speed turbulence fields surrounding an aircraft under different flight conditions, and to characterize them with the associated optical transfer functions. First, the aero-optic effects under different flight conditions are analyzed and computed, where the parameters characterizing the hyper-speed turbulence field are obtained by solving the Navier-Stokes (N-S) equations with computational fluid dynamics (CFD) methods. The trajectories of infrared rays passing through the flow field, which has a non-homogeneous refractive-index distribution, are then computed using the gradient-index ray-tracing method, and the transfer function representing the aero-optic effects is derived from the principles of Fourier optics. The simulation results show that the aero-optic transfer function behaves as a low-pass filter with a nonlinearly varying phase, which results in blurring and shifting of objects in the acquired images.
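To illustrate the gradient-index ray-tracing step only (the refractive-index field itself comes from the CFD solution and is not reproduced here), below is a minimal Python sketch that integrates the ray equation d/ds(n dr/ds) = ∇n with a simple Euler scheme. The function names `n_func` and `grad_n_func`, the step size, and the integration scheme are assumptions, not the paper's implementation.

```python
import numpy as np

def trace_ray(r0, d0, n_func, grad_n_func, ds=1e-3, n_steps=10000):
    """
    Minimal gradient-index ray tracer (illustrative sketch).
    Integrates d/ds (n dr/ds) = grad(n) using Euler steps of arc length ds.
      r0: initial position, shape (3,)
      d0: initial propagation direction, shape (3,)
      n_func(r): refractive index at position r (e.g. interpolated from the CFD field)
      grad_n_func(r): gradient of the refractive index at r, shape (3,)
    Returns the sampled ray path as an array of positions.
    """
    r = np.asarray(r0, dtype=float)
    d0 = np.asarray(d0, dtype=float)
    # Optical direction vector T = n * dr/ds
    T = n_func(r) * d0 / np.linalg.norm(d0)
    path = [r.copy()]
    for _ in range(n_steps):
        T = T + ds * grad_n_func(r)      # dT/ds = grad(n)
        r = r + ds * T / n_func(r)       # dr/ds = T / n
        path.append(r.copy())
    return np.array(path)
```

Tracing a bundle of such rays through the non-homogeneous index field yields the wavefront distortion from which an optical transfer function can be estimated.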