Makeup, born of the human pursuit of beauty, is widely used by the public. Despite its popularity, little effort has been made to tackle the challenge of face verification under makeup. To enable existing verification systems to accept or reject the claimed identity of a person wearing makeup in an image, a makeup-robust face verification framework is proposed based on a generative adversarial network. The framework synthesizes non-makeup face images from makeup images for subsequent verification. Specifically, a patchwise contrastive loss is introduced into the generative model to narrow the distance between makeup and non-makeup images. Unlike state-of-the-art methods, which rely on a pre-specified, hand-designed loss function to measure performance, the proposed approach avoids this limitation. Experimental results demonstrate that the proposed method generates non-makeup faces with few artifacts and achieves 96.3% verification accuracy on Dataset1, at least 0.8% better than several widely discussed models.
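The abstract does not give the exact form of the patchwise contrastive loss. As an illustration only, the sketch below shows a common InfoNCE-style formulation for a single patch location, in which a feature of a generated (non-makeup) patch is pulled toward the feature of the corresponding input (makeup) patch and pushed away from features of other patches; the function name, feature dimensions, and temperature value are assumptions, not details from the paper.

```python
import numpy as np

def patchwise_contrastive_loss(query, positive, negatives, tau=0.07):
    """InfoNCE-style contrastive loss for one patch location (illustrative sketch).

    query:     (d,) feature of a generated (non-makeup) patch
    positive:  (d,) feature of the corresponding input (makeup) patch
    negatives: (n, d) features of other patches from the same image
    tau:       temperature scaling the cosine similarities (assumed value)
    """
    def l2norm(x):
        # Normalize so dot products become cosine similarities.
        return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)

    q, p, negs = l2norm(query), l2norm(positive), l2norm(negatives)
    # Logits: similarity to the positive patch first, then to each negative.
    logits = np.concatenate([[q @ p], negs @ q]) / tau
    # Cross-entropy with the positive patch as the target class,
    # computed with a numerically stable log-sum-exp.
    m = logits.max()
    loss = -(logits[0] - (m + np.log(np.exp(logits - m).sum())))
    return float(loss)
```

In practice the loss would be averaged over many sampled patch locations and layers of the generator's encoder; minimizing it encourages each output patch to stay identity-consistent with the corresponding input patch while makeup is removed.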