Traditional facial recognition techniques often struggle to balance accuracy with model complexity. High accuracy typically demands intricate models, slowing recognition speeds on devices such as smartphones. Conversely, faster methods often sacrifice accuracy. We introduce a lightweight deep convolutional generative adversarial network (LW-DCGAN), designed specifically to address the challenges of occluded face recognition. By simplifying the network architecture and employing efficient feature extraction techniques such as transpose convolution, batch normalization, feature pyramid networks, and attention modules, we enhance both hierarchical sampling and contextual relevance.
1. Introduction

With the advancement of artificial intelligence and computer vision, face recognition technology has become widely applicable across various fields. Addressing the challenge of efficiently and accurately recognizing face occlusions—caused by factors such as masks and glasses—is crucial due to its impact on feature extraction, data acquisition, and computational complexity. The generative adversarial network (GAN), a deep learning model featuring a generator and a discriminator, offers a solution by fostering an iterative learning process among these components. This method aims to enhance the generator's output to more closely resemble real data distributions.1 The deep convolutional generative adversarial network (DCGAN) advances this concept by replacing fully connected layers with convolutional ones, thereby stabilizing and simplifying the network architecture for quicker convergence.2 This adaptation significantly improves the feasibility of accurately and efficiently addressing occluded face recognition challenges. Specifically, DCGAN has played a pivotal role in enhancing generative capabilities for occluded face recognition, allowing for the creation of high-quality images. This capability is critical for developing more sophisticated recognition models. In addition, it effectively learns to capture the nuances of occlusion patterns, enhancing the realism of generated images and refining the discriminator's ability to detect these features.3 Moreover, DCGAN demonstrates effectiveness in mitigating common occlusions, such as masks and glasses, thereby preserving facial feature integrity.4 This paper introduces a lightweight DCGAN (LW-DCGAN) specifically optimized for the challenges of occluded face recognition, with a focus on efficiency and accuracy suitable for mobile devices. The main contributions are as follows:
The paper is outlined as follows: Section 2 reviews the related research. Section 3 presents the LW-DCGAN architecture and algorithmic details. Section 4 explains the experimental setups and analyses. Section 5 discusses the implications of our findings. Section 6 concludes the paper.

2. Related Work

2.1. Advances in Occluded Face Recognition Algorithms

Recent advancements in occluded face recognition have significantly improved the robustness and accuracy of recognition algorithms under challenging conditions. The sparse representation classification (SRC) method, introduced by Wright et al.,7 became a foundational approach for occluded face recognition. By employing $\ell_1$-norm regression, SRC derived sparse coefficients, which proved effective in classifying occluded faces. It attained recognition rates of 98.1% on the Extended Yale dataset and 94.1% on the AR face database. In 2011, Zhang et al.8 proposed the collaborative representation method (CRC), which utilized least squares for coefficient estimation, thereby reducing computational complexity. CRC demonstrated recognition rates exceeding 90% on various experimental datasets. Ou et al.9 introduced a sparse representation-based classification (SSRC) method that incorporated an occlusion dictionary. SSRC achieved recognition rates of 97.2% on the Extended Yale B and 95.3% on the AR datasets for non-occluded faces, whereas for occluded faces, the rates were 95.8% and 92.7%, respectively. Zheng et al.10 devised an iterative robust coding technique using a Laplacian–Uniform hybrid approach, which ensured high recognition accuracy even under challenging conditions such as occlusion and pixel damage. Despite these improvements, traditional methods remained limited to "shallow" facial features, often missing finer details. In recent years, the rapid advancement of deep learning has ushered in significant progress in occluded face recognition. Mundial et al.11 leveraged supervised learning techniques to identify masked faces, achieving an accuracy rate of up to 97% through facial features extracted by deep neural networks. Vu et al.12 combined deep learning with local binary pattern features to capture information from the eyes, forehead, and eye sockets of masked faces. These features were then merged with those learned from a face detector, creating a unified framework for masked face recognition. Their system recorded an F1-score of 87% on the COMASK20 dataset and 98% on the Essex dataset. Montero et al.13 engineered an end-to-end face recognition model based on the ArcFace architecture, incorporating data enhancement and dynamic dataset mixing. This approach resulted in an average accuracy of 98% in recognizing masked faces. Hariri14 proposed an efficient method tailored for recognizing masked faces during the coronavirus disease 2019 (COVID-19) pandemic. The approach employed VGG-16, AlexNet, and ResNet-50 for feature extraction and quantization, followed by a multi-layer perceptron (MLP) classifier. On the real-world masked face recognition dataset (RMFRD), VGG-16 achieved a recognition rate of 91.3%, ResNet-50 reached 89.5%, and AlexNet secured 86.6%. Golwalkar and Mehendale15 introduced FaceMaskNet-21, a neural network optimized for masked face recognition using multiple convolutional and fully connected layers. Validated on various datasets, the network demonstrated 88.92% accuracy with an execution time of under 10 ms. Zhang et al.16 developed a lightweight occluded face recognition model based on MobileNetV2.
They replaced the average pooling in the attention module with a depth-wise separable convolution and integrated an improved dual attention module. Their model achieved accuracies of 90% and 91% on the mask-labeled faces in the wild (LFW) and mask-AgeDB datasets, respectively. Huang et al.17 proposed a progressive learning loss for face recognition (PLFace) method, implementing a progressive training strategy for deep face recognition. PLFace adaptively adjusted the weights of masked and unmasked samples at different training stages. Experiments revealed an average accuracy of 77% on the RMFRD dataset, 99.7% on the LFW dataset, and 94% on the IJB-C 1:1 validation. Ge et al.18 introduced a convolutional visual self-attention network (CVSAN) that combined local convolutional features with self-attention for long-range dependencies. On the Masked VGGFace2 dataset, CVSAN surpassed ArcFace, increasing accuracy by 0.8% on LFW and boosting TPR@FAR = 0.1% from 89.90% to 95.16% on SM-LFW. Cheng et al.19 implemented FaceNet, a face recognition system developed by Google, with transfer learning, using InceptionResNetV2, InceptionV3, and MobileNetV2. They incorporated a cosine annealing mechanism, which enhanced accuracy across all three models. In recent studies, Zhong et al.20 introduced masked uncertainty fluctuation to measure sample identifiability by combining feature amplitude and variance uncertainty. The approach resulted in an average accuracy improvement ranging from 1.33% to 13.28%. Faruque et al.21 designed a lightweight convolutional neural network model that integrated batch normalization, dropout, and depth normalization to optimize overall performance. Compared with other deep learning models, this model achieved a high recognition accuracy of 97%. Sharma et al.22 developed a novel dual method for masked face detection using AntelopeV2, which utilized the RetinaFace detection algorithm and the ResNet100 convolutional neural network for face detection and embedding generation. Experimental results indicated high accuracy. The research outlined above explored diverse approaches to occluded face recognition, addressing challenges through various models, data augmentation techniques, and feature fusion methods. However, several limitations and challenges persisted:
2.2. Progress on GANs for Facial Recognition

The application of GANs in facial recognition tasks has gained significant attention, particularly in addressing challenges such as occlusion and low-quality images. Li et al.23 introduced a masked face recognition method using deocclusion distillation, which combined GAN and attention mechanisms to predict and reconstruct facial features. Experiments demonstrated that this approach improved accuracy by 1.3% over VGGFace and 0.2% over VGGFace2. Fu et al.24 presented a GAN-based unsupervised low-light image enhancement network with an attention module to improve image quality. Using an autoencoder, the method adapted enhancement across regions, highlighting details and reducing noise. It achieved a peak signal-to-noise ratio (PSNR) of 21.523 and a structural similarity index measure (SSIM) of 0.812 on the paired normal/lowlight images (PNLI) and low-light (LOL) test sets. Chen et al.25 enhanced face detection with an improved Xception model incorporating a local GAN. By replacing standard convolutions with inception blocks using dilated convolutions, the model effectively captured multi-scale features and achieved over 90% accuracy in detecting small-area faces. Zhang et al.26 developed a domain embedding GAN for face repair, integrating three types of face domain knowledge into a hierarchical variational autoencoder to guide the repair process. Experiments showed that domain embedded generative adversarial network (DE-GAN) surpassed leading image inpainting methods on CelebA and CelebA-HQ datasets, achieving SSIM scores of 0.893 and 0.895 and PSNR scores of 26.132 and 26.208, respectively. Lin et al.27 proposed a face de-identification method using GAN with a seven-layer network and two discriminators to boost feature extraction. Their model, evaluated through pixel loss, content loss, and adversarial loss, achieved over 90% recognition accuracy across various datasets. Zhang et al.28 introduced a GAN-based method that utilized contextual information to detect small-sized faces in complex environments. They generated virtual images with rich contextual information using GAN, fused these with real images, and created a comprehensive dataset for training deep learning models. Trigueros et al.29 devised a method for generating realistic training data using GANs. The approach combined synthetic images with real images and employed a multi-scale generator network architecture to capture more details and variations. Experiments on the wider facial detection in the wild (WIDER FACE) and face detection data set and benchmark (FDDB) datasets demonstrated the effectiveness of their method in recovering clear, high-resolution faces from small, blurry ones. Yang et al.30 constructed a semantic face restoration method using a dual discriminator DCGAN. Leveraging the VGG16 network to learn deep image features, their model achieved clearer and more realistic restoration results at the pixel level. Experiments on the CelebA dataset reported a PSNR of over 26 on most test datasets. Hong et al.31 proposed a two-stage face inpainting method. The first stage predicted facial landmarks to provide geometric and symmetry information for the GAN. In the second stage, the masked face image and corresponding facial feature points were input into a GAN to inpaint the missing areas. In experiments, their method achieved SSIM and PSNR scores of 0.9 and above 30, respectively, outperforming the light adaptive face image normalization.
Huang et al.32 introduced Cycle Style GAN, which integrated the pre-trained Style-GAN 3 network into the Cycle-GAN architecture for near-infrared to visible (NIR-VIS) cross-domain learning. This model synthesized realistic visible images from near-infrared (NIR) images and achieved a rank-1 accuracy of 99.6% on the CASIA NIR-VIS 2.0 database. The research mentioned above illustrates various contributions of adversarial networks in the field of face recognition. However, these approaches still face certain limitations and challenges:
2.3. Studies on Lightweight Network Architecture

The development of lightweight network architectures has become increasingly crucial for deploying deep learning models in resource-constrained environments, such as mobile devices. These architectures aim to reduce computational complexity and memory usage while maintaining high accuracy, particularly in tasks such as facial recognition where model efficiency is critical. Howard et al.33 introduced the MobileNet model, which employed depth-wise separable convolution to reduce parameters and computational complexity. In face attribute classification, MobileNet achieved 88.7% mean average precision (mAP) with just 1% of the computation. It recorded a mAP of 19.3% in common objects in context (COCO) object detection and maintained 79.4% accuracy in face embedding tasks, all while significantly reducing model parameters. Sandler et al.34 presented MobileNetV2, featuring an inverted residual architecture and linear bottlenecks, which improved efficiency and representation. Compared with MobileNetV1, MobileNetV2 achieved the same 22.1% mAP on COCO object detection while cutting parameters by 16%, computational load by 38%, and runtime by 26%. Howard et al.35 further advanced this by proposing MobileNetV3, which integrated automated machine learning techniques to optimize its lightweight design. By employing network architecture search technology, the model automatically identified the optimal network architecture and introduced the efficient hard Swish activation function, further reducing computational overhead. Replacing SSDLite's feature extractor with MobileNetV3 yielded a 27% speed improvement over MobileNetV2. Zhang et al.36 developed ShuffleNet, which used group convolution and channel shuffle to lower computational complexity and enhance information flow. On the ImageNet 2012 dataset, experiments showed that ShuffleNet reduced classification error by 3.1% and computational complexity by 45 MFLOPs compared with MobileNet. Ma et al.37 introduced ShuffleNetV2, which enhanced group convolution and optimized feature transfer mechanisms. By simplifying network design and improving feature transfer efficiency, ShuffleNetV2 achieved better computational and storage efficiency on practical hardware, making it well-suited for low-power environments. At 500 MFLOPs, ShuffleNetV2 was 58% faster than MobileNetV2, 63% faster than ShuffleNetV1, and 25% faster than Xception. Tan and Le38 presented EfficientNet, which employed compound scaling to optimize network width, depth, and resolution. This method ensured high efficiency across various resource constraints. On the ImageNet dataset, EfficientNet achieved a top-5 accuracy of 97.1% with 66M parameters, surpassing MobileNetV2. Han et al.39 introduced GhostNet, a model that created additional feature maps through linear transformations of existing ones, significantly lowering computational demands. Experiments showed that GhostNet outperformed other networks on the ImageNet dataset, achieving a higher top-1 accuracy than MobileNetV3 while maintaining similar latency. Liu et al.40 developed a lightweight convolutional neural network for real-time semantic segmentation. The network used branched skip connections to capture contextual information and applied factorized dilated depth-wise separable convolutions to learn features from various scales. Despite its small size of 0.8M parameters, the network processed images at 60 FPS on a single RTX 2080Ti graphics processing unit (GPU).
Chen et al.41 investigated a parallel design that combined MobileNet and Transformer, leveraging the strengths of MobileNet in local feature processing and Transformer in global interactions. In ImageNet classification tasks, this design outperformed MobileNetV3 within the low FLOP range of 25M to 500M FLOPs. Lyu et al.42 proposed a GPU-free real-time object detection method using a quantized single-shot multi-box detector (MobileNet-SSD), combining the lightweight design of MobileNet with the real-time detection of SSD. The quantized model significantly reduced computational and storage requirements. Experiments on a dataset of 22k monitoring images demonstrated a compression ratio of up to 21 times and a detection speed of nearly 25 FPS in a central processing unit (CPU)-only environment, with an mAP of 86.83% and a model size of 600 KB. Kavyashree and El-Sharkawy43 enhanced the MobileNet baseline architecture and reduced its size to 2.3 MB through techniques such as weight quantization, model pruning, and channel pruning, achieving an accuracy of 89.13%. Shi et al.44 introduced DPNet, a dual-path network for efficient object detection with lightweight self-attention. It used a self-attention module in the backbone to encode global interactions and a multi-input version in the feature pyramid network (FPN) for cross-resolution interactions. On the COCO dataset, DPNet achieved 29.0% AP on the test-dev set with only 1.14 GFLOPs and a model size of 2.27M parameters. Jia et al.45 designed a recognition model based on an improved YOLOv7 combined with the lightweight MobileNetV3. MobileNetV3 was used for feature extraction, reducing the number of parameters while integrating the coordinate attention (CA) mechanism and the SIoU loss function to enhance accuracy. The model was tested on image datasets, achieving an accuracy of 92.3%. The research into lightweight network architectures is extensive but several challenges remain:
To address this issue, studying the integration of lightweight network architectures with DCGANs could enhance the stability of network training. This combination would improve the model's ability to learn and interpret the complex features of occluded areas, thereby enhancing the accuracy of face recognition.

3. Methods

LW-DCGAN is a generative adversarial network specifically designed for addressing the challenge of occluded face recognition. It utilizes a streamlined convolutional network structure, enhanced with FPNs and attentional residual context modules (ARCM), to balance high-quality image generation with reduced model complexity. LW-DCGAN aims to improve the accuracy of recognizing occluded faces while maintaining a lightweight and efficient design, making it suitable for deployment in resource-constrained environments, such as on mobile devices for real-time applications.

3.1. Architecture of LW-DCGAN

The core architecture of LW-DCGAN features a generator and a discriminator. The generator utilizes a lightweight convolutional network architecture, integrating an FPN and an ARCM to progressively generate high-resolution images of occluded faces. Through the use of transposed convolution layers and Tanh activation, the generator upscales features to produce the final image output. The discriminator, on the other hand, extracts facial features through convolutional layers and incorporates a CA module to enhance feature representation. Furthermore, an auxiliary face recognition module is cascaded within the discriminator to support face recognition tasks. This design is intended to enhance feature extraction efficiency and image quality, thereby ensuring high-accuracy face recognition even in the presence of occlusion. The network architecture of LW-DCGAN is illustrated in Fig. 1.

3.2. Generator Network

3.2.1. Lightweight convolutional module

In the generator, the lightweight core component is the bottleneck block, which consists of a series of convolutional layers and activation functions. The first layer of the bottleneck block utilizes point-wise convolution to expand the number of channels, allowing for the extraction of more detailed feature information. The second layer employs depth-wise convolution to fuse features, integrating information from different levels. The final layer again uses point-wise convolution, this time for dimensionality reduction, which decreases the computational load and the number of parameters. In addition, squeeze-and-excitation modules are incorporated into the network. These modules dynamically adjust the importance of channels by learning the relationship between effective and ineffective weights, thereby enhancing the network's expressive power and accuracy. Batch normalization (BN) is applied to stabilize and accelerate the training process, whereas rectified linear units (ReLU) are used as activation functions to introduce non-linearity. The detailed architecture of the bottleneck block is illustrated in Fig. 2.
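For concreteness, the following is a minimal Keras sketch of the bottleneck block just described (point-wise expansion, depth-wise fusion, point-wise reduction, plus squeeze-and-excitation, BN, and ReLU). The expansion ratio, kernel size, SE reduction factor, and the residual shortcut are illustrative assumptions rather than values taken from Fig. 2.

```python
import tensorflow as tf
from tensorflow.keras import layers


def se_block(x, reduction=4):
    """Squeeze-and-excitation: reweight channels by their learned importance."""
    channels = x.shape[-1]
    w = layers.GlobalAveragePooling2D()(x)                  # squeeze to (B, C)
    w = layers.Dense(channels // reduction, activation="relu")(w)
    w = layers.Dense(channels, activation="sigmoid")(w)     # excitation weights in [0, 1]
    w = layers.Reshape((1, 1, channels))(w)
    return layers.Multiply()([x, w])


def bottleneck_block(x, out_channels, expansion=4, stride=1):
    """Point-wise expand -> depth-wise fuse -> point-wise reduce, with SE, BN, and ReLU."""
    y = layers.Conv2D(x.shape[-1] * expansion, 1, use_bias=False)(x)   # expand channels
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.DepthwiseConv2D(3, strides=stride, padding="same", use_bias=False)(y)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = se_block(y)                                          # dynamic channel reweighting
    y = layers.Conv2D(out_channels, 1, use_bias=False)(y)    # point-wise dimensionality reduction
    y = layers.BatchNormalization()(y)
    if stride == 1 and x.shape[-1] == out_channels:          # assumed residual shortcut
        y = layers.Add()([x, y])
    return y
```

Stacking several such blocks forms the feature extractor from which the FPN described next taps its multi-scale inputs.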
3.2.2. FPN module

The core of the FPN architecture is the establishment of top-down and bottom-up feature pathways. The top-down path is responsible for up-sampling high-level feature maps to match the size of the feature maps in the bottom-up path. Conversely, the bottom-up path connects low-resolution, high-level semantic features with high-resolution, low-level semantic features through feature fusion. This architecture enables shallow features to receive guidance from deeper features, thereby enhancing the detection capabilities of the shallow features. The entire process constructs a comprehensive feature pyramid, allowing features of different scales and semantic levels to be effectively utilized. In our model, we select layers 2, 3, 5, 9, and 12 from the feature extraction architecture to extract image features and merge them as multi-scale feature information within the FPN. The process of building the FPN network is illustrated in Fig. 3. The input image undergoes a series of operations, including convolution and pooling, to form feature layers C1, C2, C3, C4, and C5 at different scales. These layers have an increasing down-sampling rate and decreasing resolution: the three-channel input image is progressively down-sampled, producing feature maps C1 to C5 of successively smaller sizes. A convolution is applied to change the number of feature channels to 64, followed by 2× up-sampling, yielding feature maps of different sizes. P5 is calculated from C5, P4 is derived by combining the up-sampled P5 with C4, and so forth. The final output is the feature map set {P2, P3, P4, P5}.
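As a sketch of the pathway just described, the snippet below builds P2 to P5 from backbone features C2 to C5 in Keras: lateral convolutions unify the channel count at 64, and each coarser map is 2× up-sampled and fused with the next finer one. The 1×1 lateral kernels, additive fusion, 3×3 smoothing convolutions, and the restriction to C2 to C5 are conventional FPN choices assumed here, not details taken from Fig. 3.

```python
import tensorflow as tf
from tensorflow.keras import layers


def build_fpn(backbone_features, out_channels=64):
    """Top-down FPN: project C2..C5 to a common width, then merge coarse-to-fine.

    `backbone_features` is the list [C2, C3, C4, C5] taken from the generator's
    feature extractor, highest resolution first.
    """
    c2, c3, c4, c5 = backbone_features

    # Lateral projections unify the channel count (64, as stated in the paper).
    p5 = layers.Conv2D(out_channels, 1, padding="same")(c5)
    p4 = layers.Conv2D(out_channels, 1, padding="same")(c4)
    p3 = layers.Conv2D(out_channels, 1, padding="same")(c3)
    p2 = layers.Conv2D(out_channels, 1, padding="same")(c2)

    # Top-down pathway: 2x up-sample the coarser map and fuse it with the finer one.
    p4 = layers.Add()([p4, layers.UpSampling2D(2)(p5)])
    p3 = layers.Add()([p3, layers.UpSampling2D(2)(p4)])
    p2 = layers.Add()([p2, layers.UpSampling2D(2)(p3)])

    # 3x3 smoothing convolutions reduce aliasing after fusion.
    smooth = lambda t: layers.Conv2D(out_channels, 3, padding="same")(t)
    return [smooth(p2), smooth(p3), smooth(p4), smooth(p5)]
```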
3.2.3. ARCM module

To enhance feature extraction in the unobscured regions, LW-DCGAN incorporates ARCM into the generator's design. This module boosts feature extraction capabilities, mitigates gradient vanishing issues, and addresses generator instability. The ARCM consists of three key components: a context enhancement (CE) module, a CA module, and a spatial attention (SA) module. The CE module employs four parallel branches, each with its own convolutional kernels, aimed at maintaining the same receptive field size while minimizing the overall number of parameters. In this context, the receptive field refers to the perceptual range of each neuron within the network in relation to the input data. Each branch generates a feature map, and these are then concatenated to form an enriched context feature map. The concatenation operation is crucial in integrating diverse feature information from different branches, thereby enhancing the model's expressive capabilities. Following the cascading of the CE with the CA and SA modules, a skip connection is introduced to further stabilize the network. The structural framework of ARCM is depicted in Fig. 4. The input to the ARCM is the output feature map collection from the FPN. The input feature maps are up-sampled through transposed convolution, unifying all feature maps to a common resolution. These processed feature maps are then concatenated channel-wise, merging them into a single tensor. As the channel dimensions of P2, P3, P4, and P5 are already aligned, it is only necessary to concatenate them along the channel axis. This concatenation results in a context-enhanced feature map. The CA module then applies global average pooling (GAP) and global maximum pooling to the concatenated feature map, after which the results are passed through an MLP, which consists of a three-layer fully connected network. The final output is obtained through a normalized sigmoid function. The GAP operation is performed on the context-enhanced feature map to generate the channel attention feature map $F_c$, as illustrated in Eq. (1):

$$F_c = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} x_{c,i,j}, \tag{1}$$

where $x_{c,i,j}$ denotes the value of the feature map at channel $c$, height $i$, and width $j$, and $H$ and $W$ stand for the height and width of the feature map. Applying fully connected layers and a sigmoid activation function to $F_c$ results in the channel attention weight feature map $M_c$, as shown in Eq. (2):

$$M_c = \sigma\!\left(W_{\mathrm{fc}} F_c + b_{\mathrm{fc}}\right), \tag{2}$$

where $W_{\mathrm{fc}}$ means the weight of the fully connected layer, $b_{\mathrm{fc}}$ is the bias of the fully connected layer, and $\sigma$ represents the sigmoid function. The fully connected layer receives $F_c$ as input and performs a linear transformation with these weights and biases. The channel attention weight feature map $M_c$ and the context-enhanced feature map are multiplied element by element to obtain the channel attention enhancement feature map $F_{\mathrm{ca}}$. The workflow of the CA module is shown in Fig. 5. The SA module first performs channel-wise average pooling and channel-wise max pooling, followed by a two-dimensional convolution. The resulting feature maps are then passed through a sigmoid function. Afterward, the feature maps are concatenated along the channel dimension. Finally, a skip connection is introduced to prevent information loss and mitigate gradient vanishing. The architecture of the spatial attention module is illustrated in Fig. 6. By applying both an average pooling kernel and a max pooling kernel to the channel attention-enhanced feature map $F_{\mathrm{ca}}$, we perform pooling operations in both the horizontal and vertical directions. The feature map in the horizontal direction, $z^{h}$, is obtained as shown in Eq. (3), where $F_{\mathrm{ca}}(c, i, j)$ symbolizes the value of the $j$'th element in row $i$ of channel $c$ of the feature map $F_{\mathrm{ca}}$. Simultaneously, in the vertical direction, we obtain the feature map $z^{w}$ by applying average pooling and max pooling as shown in Eq. (4), where $H$ stands for the number of elements considered along the height direction during pooling and $F_{\mathrm{ca}}(c, j, i)$ denotes the value of the element in channel $c$ of the feature map at the $i$'th column and $j$'th row. The feature map $z^{w}$ is transposed along the width direction, resulting in the transposed feature map $(z^{w})^{\top}$. Concatenated with the feature map $z^{h}$, the feature layer $z$ is obtained. It is subject to a convolution operation, resulting in the feature layer $z'$. Then, batch normalization and the ReLU activation function are applied to $z'$, resulting in the feature layer $z''$. $z''$ is segmented, and the segmented parts are transposed and subject to a convolution operation. The resulting feature maps are then passed through the sigmoid function to generate the final spatial weights. The spatial weights in the height direction are represented by $g^{h}$, as shown in Eq. (5), and the spatial weights in the width direction are represented by $g^{w}$, as shown in Eq. (6), where $i$ symbolizes the position coordinate along the height direction of the feature map, $j$ represents the position coordinate along the width direction of the feature map, and $F(i, j)$ stands for the pixel value of a channel in the feature map. Ultimately, the output of the ARCM module, obtained by weighting the feature map with $g^{h}$ and $g^{w}$ and adding the skip connection, is given in Eq. (7), where $F(i, j)$ denotes the value at position $(i, j)$ in the original feature map.
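The sketch below illustrates the two attention stages in the style described above: channel attention from pooled descriptors passed through a small MLP and a sigmoid, and spatial attention from directional (height and width) pooling followed by convolution, sigmoid weighting, and a skip connection. The MLP depth, the way the average- and max-pooled responses are combined (summation), and the 7×7 convolution kernel are assumptions for illustration; Eqs. (2)-(7) and Figs. 5-6 define the actual module.

```python
import tensorflow as tf
from tensorflow.keras import layers


def channel_attention(x, reduction=8):
    """GAP + max pooling -> shared MLP -> sigmoid channel weights (Eq. 2 style)."""
    c = x.shape[-1]
    mlp = tf.keras.Sequential([
        layers.Dense(c // reduction, activation="relu"),
        layers.Dense(c),
    ])
    avg = mlp(tf.reduce_mean(x, axis=[1, 2]))           # (B, C) descriptor from GAP
    mx = mlp(tf.reduce_max(x, axis=[1, 2]))             # (B, C) descriptor from global max pooling
    w = tf.sigmoid(avg + mx)[:, None, None, :]          # broadcast channel weights over H, W
    return x * w                                         # channel-attention-enhanced feature map


def spatial_attention(x):
    """Directional pooling along H and W -> conv -> sigmoid spatial weights, plus skip."""
    pooled_h = tf.reduce_mean(x, axis=2, keepdims=True) + tf.reduce_max(x, axis=2, keepdims=True)
    pooled_w = tf.reduce_mean(x, axis=1, keepdims=True) + tf.reduce_max(x, axis=1, keepdims=True)
    g_h = tf.sigmoid(layers.Conv2D(1, 7, padding="same")(pooled_h))   # weights along height
    g_w = tf.sigmoid(layers.Conv2D(1, 7, padding="same")(pooled_w))   # weights along width
    return x * g_h * g_w + x                              # weighted map plus skip connection
```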
3.2.4. Network slimming

To compress the generator network model into a more compact size for efficient deployment in resource-limited environments, such as mobile devices, our goal is to reduce model parameters, simplify architectures, and accelerate the inference process. We employed network slimming, a technique that enables us to achieve a streamlined network without introducing excessive complexity. Following this, fine-tuning is performed to restore the original performance, ensuring that the generator's feature extraction and image generation capabilities are preserved. In this context, the original generator model in LW-DCGAN is referred to as the teacher generator, whereas the slimmed-down version is known as the student generator. In addition, we compute the scaling factor for the BN layer. This factor directly influences the output of the convolutional layer, and if its value is too small, the overall output of that channel will be biased toward lower values. This implies that, during the forward propagation process, the channel carries less information. The scaling factor can thus serve as a basis for determining the importance of convolutional channels, as shown in Eq. (8):

$$s_i = \frac{\gamma}{\sqrt{\sigma_i^2 + \epsilon}}, \tag{8}$$

where $\gamma$ is a constant scaling parameter applied during normalization, which directly affects the output of the convolution layer, $\sigma_i^2$ is the variance of the $i$'th channel, and $\epsilon$ is a very small positive number added for numerical stability. $\gamma$ is initialized to 1 and then gradually adjusted through the training process. In the optimization objective of the generator, a regularization term is introduced as a penalty to limit the numerical size of $\gamma$. An $L_1$ regularization term is added to the original loss function to penalize the sum of the absolute values of the scaling factors. The optimization goal is given by Eq. (9):

$$L = l_{\mathrm{gen}} + \lambda \sum_{\gamma \in \Gamma} |\gamma|, \tag{9}$$

where $l_{\mathrm{gen}}$ represents the original generator loss function, $L$ stands for the total loss after introducing the regularization term, $\lambda$ is the coefficient of the regularization term, controlling the impact of the regularization term on the total loss, and $\Gamma$ denotes the set of scaling factors across the different channel layers. Correction of the scaling factor is given by Eq. (10):

$$\gamma \leftarrow \gamma - \eta\,\lambda\,\operatorname{sign}(\gamma), \tag{10}$$

where the sign function converts elements greater than 0 to 1 and those less than 0 to $-1$, $\eta$ stands for the learning rate, and $\lambda$ is the regularization strength that controls the intensity of scaling factor decay during the correction process. Subsequently, all scaling factors are sorted, and the clipping threshold is selected according to the clipping ratio. Fifty percent of the channels are clipped, so the threshold corresponds to the median value. Then, the scaling factor of each convolution channel in every layer is compared with the threshold. If it is greater than the threshold, the weight parameters of the channel are retained; otherwise, the parameters are set to zero. This step leaves many convolution channels with weights of 0 in the model, achieving model sparsity. The number of retained channels and their indices in each convolutional layer are recorded, and the width of each network layer is redesigned based on this channel count to obtain a new, more streamlined student generator model. The retained weights from the old model are copied into the new model to complete the construction of the student generator. In the streamlined network, the number of channels in the fully connected layers is reduced from 256 to 128. Consequently, the channel count in the transposed convolution layers is also decreased, leading to a reduction in the number of parameters in each layer and a decrease in the overall network complexity. Figure 7 illustrates the process of network slimming. In Fig. 7, scaling factors greater than the threshold mark channels that are retained, whereas scaling factors smaller than the threshold mark channels that are pruned. Although the algorithm streamlines the number of channels and network layers, it retains the core functionality necessary to ensure that the network continues to effectively generate high-quality images.
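A compact sketch of the slimming procedure follows: an L1 penalty on the BN scaling factors encourages sparsity during training (Eq. (9)), and channel importance is then scored so that the lower half can be pruned. The use of the BN moving variance for the effective scale and the percentile-based threshold mirror the description above, under the assumption of standard Keras BN layers; rebuilding the student generator and copying the surviving weights are omitted.

```python
import numpy as np
import tensorflow as tf


def bn_l1_penalty(model, strength=1e-4):
    """L1 penalty on BN scaling factors (gamma), added to the generator loss (Eq. 9)."""
    gammas = [layer.gamma for layer in model.layers
              if isinstance(layer, tf.keras.layers.BatchNormalization)]
    return strength * tf.add_n([tf.reduce_sum(tf.abs(g)) for g in gammas])


def channels_to_keep(model, prune_ratio=0.5):
    """Rank channels by |gamma| / sqrt(var + eps) and keep those above the threshold (Eq. 8)."""
    scores, per_layer = [], {}
    for layer in model.layers:
        if isinstance(layer, tf.keras.layers.BatchNormalization):
            gamma = layer.gamma.numpy()
            var = layer.moving_variance.numpy()
            s = np.abs(gamma) / np.sqrt(var + layer.epsilon)   # effective per-channel scale
            per_layer[layer.name] = s
            scores.append(s)
    threshold = np.percentile(np.concatenate(scores), prune_ratio * 100)  # median for 50%
    return {name: np.where(s > threshold)[0] for name, s in per_layer.items()}
```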
The structural details of the slimmed-down LW-DCGAN generator network are outlined in Table 1, which presents the output shapes, operations, convolution kernel sizes, strides, and activation functions utilized at each level.

Table 1. The slimmed-down generator network architecture.
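To make the output stage concrete, here is a rough Keras sketch of the slimmed generator head: a reduced fully connected layer (256 cut to 128 channels, as noted above), a stack of transposed convolutions with BN and ReLU, and a final Tanh-activated layer producing the image. All shapes and channel widths below are placeholders; Table 1 gives the actual configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers


def upsample_head(features, base_channels=128):
    """Output stage of the slimmed generator: transposed convolutions up-sample the
    fused FPN/ARCM features, ending in a Tanh-activated image."""
    x = layers.Dense(base_channels)(layers.Flatten()(features))   # slimmed FC: 256 -> 128
    x = layers.Reshape((4, 4, base_channels // 16))(x)            # placeholder spatial seed
    for ch in (64, 32, 16):                                       # illustrative channel widths
        x = layers.Conv2DTranspose(ch, 4, strides=2, padding="same", use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    return layers.Conv2DTranspose(3, 4, strides=2, padding="same", activation="tanh")(x)
```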
3.3. Discriminator Network

The discriminator plays a crucial role in LW-DCGAN by distinguishing and categorizing images generated by the generator from real images, prompting continuous optimization by the generator. To ensure accurate feature capture from the input data, an attention mechanism is introduced into the discriminator, enhancing the original DCGAN design rather than employing lightweight processing. This attention mechanism improves the accuracy and effectiveness of the discriminator by enabling it to focus on important image regions. The network architecture of the LW-DCGAN discriminator is outlined in Table 2.

Table 2. Discriminator network architecture.
3.3.1. CA module

For each position $i$ in the feature map, we calculate the attention weights between it and all positions $j$. This attention weight is obtained through linear transformation and dot product operations on the input features, as shown in Eq. (11):

$$\alpha_{i,j} = \frac{1}{N}\left(W_q x_i\right)^{\top}\left(W_k x_j\right), \tag{11}$$

where $N$ is the normalization factor and $C$ represents the number of channels in the feature map. $W_q$, $W_k$, and $W_v$ are weights learned through convolutional operations, used to map the input elements $x_i$ and $x_j$. Once the attention weights are obtained, we add the attended features to the original features to obtain the weighted feature $\hat{x}_i$, as shown in Eq. (12):

$$\hat{x}_i = x_i + \sum_{j} \alpha_{i,j}\, W_v x_j. \tag{12}$$

The features obtained in the discriminator are connected to a face recognition module. As the preceding convolutional layers have already extracted deep features of facial images, we added a GAP layer. This step performs dimensionality reduction on the feature map, reducing its spatial dimensions to 1 while preserving feature information for each channel. The global average pooling applied to the weighted features is given in Eq. (13):

$$g_c = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} \hat{F}_c(i, j), \tag{13}$$

where $\hat{F}_c(i, j)$ denotes the value of the weighted feature map at position $(i, j)$ and channel $c$, $H$ and $W$ are the height and width of the weighted feature, respectively, and $g_c$ illustrates the value of channel $c$ in the resulting feature map after global average pooling, i.e., the weighted average across all positions for that channel.

3.3.2. Face recognition module

To simplify the network architecture of the face recognition module and reduce the number of parameters, we utilize pooled features as the input for the face recognition module, directing them to the softmax layer without incorporating an additional fully connected layer. This streamlined design leverages the deep features extracted earlier in the discriminator, enabling the face recognition module to perform its tasks without unnecessary complexity. The softmax layer, a standard output layer in deep learning neural networks for multi-class classification problems, transforms the network output into a probability distribution. Predicted probabilities for each category range between 0 and 1, with the sum across all categories equaling 1. During training, the output of the neural network passes through the softmax layer, yielding a predicted probability distribution, which is then compared with the true labels' one-hot encoding. The cross-entropy loss function quantifies the dissimilarity between these two distributions, converting the difference between the network's predictions and the actual labels into a scalar value. This scalar value serves as a metric for evaluating the accuracy of the model's predictions. Through optimization algorithms such as gradient descent, the optimizer seeks to minimize the cross-entropy loss function, aligning the predictions more closely with the actual labels. This design not only preserves the integrity of the feature extraction performed earlier in the discriminator but also makes the entire network architecture more lightweight, making it suitable for deployment in resource-constrained environments. The architecture of the face recognition module is illustrated in Fig. 8.
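A sketch of the discriminator-side attention and the recognition head follows. The query/key/value projections, the softmax normalization, and the reduction factor of 8 are common self-attention conventions assumed here; Eqs. (11)-(13) and Fig. 8 define the actual design. The recognition head uses a single linear projection to class logits before the softmax, reflecting the absence of additional hidden fully connected layers.

```python
import tensorflow as tf
from tensorflow.keras import layers


def self_attention(x):
    """Dot-product attention over spatial positions, added back to the input (Eqs. 11-12)."""
    h, w, c = x.shape[1], x.shape[2], x.shape[3]             # assumes a fixed input resolution
    q = layers.Conv2D(c // 8, 1)(x)                           # query projection (W_q)
    k = layers.Conv2D(c // 8, 1)(x)                           # key projection (W_k)
    v = layers.Conv2D(c, 1)(x)                                # value projection (W_v)
    q = tf.reshape(q, (-1, h * w, c // 8))
    k = tf.reshape(k, (-1, h * w, c // 8))
    v = tf.reshape(v, (-1, h * w, c))
    attn = tf.nn.softmax(tf.matmul(q, k, transpose_b=True) / float(c // 8))  # normalized weights
    out = tf.reshape(tf.matmul(attn, v), (-1, h, w, c))
    return x + out                                            # weighted features added to the original


def recognition_head(features, num_identities):
    """GAP to a per-channel descriptor (Eq. 13), then softmax over identities (Sec. 3.3.2)."""
    pooled = layers.GlobalAveragePooling2D()(features)        # spatial dimensions reduced to 1
    return layers.Dense(num_identities, activation="softmax")(pooled)
```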
3.4. Related Loss Functions

3.4.1. Generator loss function

LW-DCGAN is designed for the effective generation and recognition of occluded faces. To achieve this, we employ multiple loss functions to guide the training of the generator. Initially, the mean squared error (MSE) loss is used to measure the pixel-level difference between the generated and target images, as shown in Eq. (14):

$$L_{\mathrm{MSE}} = \frac{1}{N} \sum_{i=1}^{N} \big\| G(z_i) - y_i \big\|_2^2, \tag{14}$$

where $G(z_i)$ signifies the image generated by the generator from input noise vector $z_i$, $y_i$ represents the corresponding target image for $z_i$, and $N$ is the number of training samples. Second, we employ feature matching loss to assess the quality of generated images. Feature matching loss is achieved by comparing the feature representations of generated and real images at the intermediate layer of the discriminator, as shown in Eq. (15):

$$L_{\mathrm{FM}} = \mathbb{E}\big[\,\| D_f(x) - D_f(G(z)) \|_2^2\,\big], \tag{15}$$

where $z$ illustrates the input noise vector, $D_f(x)$ denotes the intermediate-layer features of the discriminator given input $x$, and $G(z)$ stands for the generated output of the generator given noise $z$. Simultaneously, we introduce adversarial loss and use generative adversarial networks to improve the performance of the generator. The adversarial loss drives the generator to learn to generate more realistic images by comparing the output of the generator with the assessments made by the discriminator, as shown in Eq. (16):

$$L_{\mathrm{adv}} = -\mathbb{E}_{z}\big[\log D(G(z))\big]. \tag{16}$$

To leverage segmentation label information, we incorporate a cross-entropy loss function into the generator's loss for the segmentation task. Minimizing this loss effectively optimizes the generator's ability to extract segmentation features and predict accurate segmentation results, thereby enhancing image generation using segmentation label information. The cross-entropy loss is presented in Eq. (17):

$$L_{\mathrm{seg}} = -\sum_{c=1}^{K} p_c \log \hat{p}_c, \tag{17}$$

where $p_c$ represents the true distribution of segmentation labels, indicating the actual segmentation results, whereas $\hat{p}_c$ is the predicted distribution of the generator, representing the predicted probabilities for each category, and $K$ denotes the number of classes. When the four loss functions are combined, the comprehensive loss function of the LW-DCGAN generator is obtained, as shown in Eq. (18):

$$L_{G} = \lambda_1 L_{\mathrm{MSE}} + \lambda_2 L_{\mathrm{FM}} + \lambda_3 L_{\mathrm{adv}} + \lambda_4 L_{\mathrm{seg}}, \tag{18}$$

where $\lambda_1$, $\lambda_2$, $\lambda_3$, and $\lambda_4$ are weight parameters used to balance the contributions of the various loss terms.

3.4.2. Discriminator loss function

The discriminator in LW-DCGAN employs both binary cross-entropy loss and gradient penalty loss. Binary cross-entropy loss serves to quantify the distinction between generated and real images. The objective of the discriminator is to accurately classify a generated image as false (0) or a real image as true (1). The binary cross-entropy loss is denoted as in Eq. (19):

$$L_{\mathrm{BCE}} = -\mathbb{E}_{x}\big[\log D(x)\big] - \mathbb{E}_{z}\big[\log\big(1 - D(G(z))\big)\big]. \tag{19}$$

The gradient penalty loss is applied to enforce the constraint that the gradient norm of the discriminator approaches 1, thereby improving the training stability of the discriminator and the quality of the generated images. The gradient penalty loss is illustrated in Eq. (20):

$$L_{\mathrm{GP}} = \mathbb{E}_{\hat{x}}\Big[\big(\| \nabla_{\hat{x}} D(\hat{x}) \|_2 - 1\big)^2\Big], \tag{20}$$

where $\nabla_{\hat{x}} D(\hat{x})$ stands for the gradient of the discriminator with respect to the input $\hat{x}$. The overall loss function is a weighted sum of $L_{\mathrm{BCE}}$ and $L_{\mathrm{GP}}$. This combination is critical for ensuring that the discriminator network effectively distinguishes between real and fake images while maintaining stable gradients. The comprehensive loss function is expressed in Eq. (21):

$$L_{D} = L_{\mathrm{BCE}} + \lambda_{\mathrm{gp}} L_{\mathrm{GP}}, \tag{21}$$

where $\lambda_{\mathrm{gp}}$ is a hyperparameter that balances the contribution of the gradient penalty relative to the binary cross-entropy loss. By minimizing $L_{D}$, the discriminator is trained to develop robust recognition capabilities for occluded faces.

3.4.3. Recognition module loss function

In the recognition module, we employ cross-entropy loss as the primary loss function to evaluate the classification performance of the model. This widely used loss function effectively gauges the disparity between the probability distribution generated by the model's output and the actual labels.
Our goal in this context is for the model to accurately categorize input face images into distinct classes, and cross-entropy loss plays a crucial role in assessing the classification accuracy of the model. At the output layer of the model, we utilize the softmax activation function to transform the initial model output into a class probability distribution. This activation function ensures that the sum of probabilities for all categories equals 1. Cross-entropy loss then computes the loss between this probability distribution and the distribution of the true labels. Specifically, for a given sample, assuming the raw output of the model is represented as $z$, where $z_i$ denotes the score for the $i$'th category, and the actual labels are indicated as $y$, where $y_i$ conveys the true label for the $i$'th category, the cross-entropy loss is shown in Eq. (22):

$$L_{\mathrm{CE}} = -\sum_{i=1}^{C} y_i \log \hat{p}_i, \qquad \hat{p}_i = \frac{e^{z_i}}{\sum_{j=1}^{C} e^{z_j}}, \tag{22}$$

where $C$ signifies the number of categories, $y_i$ is the actual label of the $i$'th category, and $\hat{p}_i$ expresses the predicted probability for the $i$'th category after passing through the softmax function.

3.5. Training Algorithm of LW-DCGAN

During the training process of LW-DCGAN, the generator $G$ is used to convert the input $z$ into the generated image $G(z)$. The task of the discriminator $D$ is to distinguish the real image $x$ from the generated image $G(z)$. $z$ is a random variable, and $p(z)$ specifies the probability distribution of this random variable. $D(x)$ expresses the prediction result of the discriminator on the real image, and $D(G(z))$ is the prediction result of the discriminator for the fake image. $m$ represents the number of samples in a mini-batch during each training iteration. $\theta_g$ represents the parameter set of the generator $G$, whereas $\theta_d$ refers to the parameters of the discriminator $D$. $\nabla_{\theta_g} L_G$ is the gradient of the generator loss function with respect to $\theta_g$, whereas $\nabla_{\theta_d} L_D$ represents the gradient of the discriminator loss with respect to $\theta_d$. $w_g$ denotes the current values of the weights in generator $G$, whereas $w_d$ expresses the current weight parameters of the discriminator $D$. $\alpha_g$ defines the learning rate of the generator $G$, whereas $\alpha_d$ is the learning rate for the discriminator $D$. Using $\nabla_{\theta_g} L_G$, we compute the gradient of the generator loss function for each example in the mini-batch and then sum these gradients to obtain the total gradient of the generator loss function over the mini-batch. Similarly, $\nabla_{\theta_d} L_D$ is computed for each sample, and these gradients are summed to obtain the total gradient of the discriminator loss function on the mini-batch. This total gradient is used to update the weights of the discriminator to minimize the discriminator loss function. The training algorithm is shown in Table 3.

Table 3. Pseudo-code of LW-DCGAN training algorithm.
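To tie the losses and the training algorithm together, the sketch below implements one adversarial update in TensorFlow. The loss weights, the use of an intermediate discriminator layer as `feat_extractor` for feature matching, and the omission of the segmentation cross-entropy term (Eq. (17)) are simplifications; Table 3 and Eqs. (14)-(21) specify the full procedure.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()   # discriminator assumed to output probabilities
mse = tf.keras.losses.MeanSquaredError()
lam = dict(mse=1.0, fm=1.0, adv=0.5, gp=10.0)   # illustrative loss weights


def gradient_penalty(discriminator, real, fake):
    """Penalty pushing the discriminator gradient norm toward 1 (Eq. 20)."""
    eps = tf.random.uniform([tf.shape(real)[0], 1, 1, 1])
    mix = eps * real + (1.0 - eps) * fake
    with tf.GradientTape() as tape:
        tape.watch(mix)
        pred = discriminator(mix, training=True)
    grads = tape.gradient(pred, mix)
    norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]) + 1e-12)
    return tf.reduce_mean((norm - 1.0) ** 2)


@tf.function
def train_step(gen, disc, feat_extractor, occluded, target, g_opt, d_opt):
    with tf.GradientTape() as gt, tf.GradientTape() as dt:
        fake = gen(occluded, training=True)
        d_real, d_fake = disc(target, training=True), disc(fake, training=True)

        # Generator: pixel MSE + feature matching + adversarial terms (Eqs. 14-16, 18).
        fm = mse(feat_extractor(target), feat_extractor(fake))
        adv = bce(tf.ones_like(d_fake), d_fake)
        g_loss = lam["mse"] * mse(target, fake) + lam["fm"] * fm + lam["adv"] * adv

        # Discriminator: binary cross-entropy + gradient penalty (Eqs. 19-21).
        d_loss = (bce(tf.ones_like(d_real), d_real)
                  + bce(tf.zeros_like(d_fake), d_fake)
                  + lam["gp"] * gradient_penalty(disc, target, fake))

    g_opt.apply_gradients(zip(gt.gradient(g_loss, gen.trainable_variables),
                              gen.trainable_variables))
    d_opt.apply_gradients(zip(dt.gradient(d_loss, disc.trainable_variables),
                              disc.trainable_variables))
    return g_loss, d_loss
```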
4. Experimental Analysis

The assessment of LW-DCGAN involved a multi-faceted approach. First, we conducted an ablation experiment on LW-DCGAN to systematically investigate the impact of each algorithmic component on the model's overall performance. Next, we designed generalization experiments to evaluate the model across both training and test datasets. These experiments aimed to assess the model's ability to generalize to new, unseen data, ensuring robust performance in diverse scenarios beyond the training data. Finally, we replaced the generative adversarial network modules in LW-DCGAN with those of GAN, DCGAN, Wasserstein GAN with gradient penalty (WGAN-GP), and image-to-image translation with a conditional adversarial network (Pix2Pix). Comparative experiments were then conducted to evaluate the performance of these models against LW-DCGAN.

4.1. Preprocessing of Dataset

The experimental dataset, CelebA-Mask, comprises over 24,000 face images from more than 4000 individuals, each meticulously annotated with detailed facial features such as hair, eyes, mouth, nose, and facial contours. CelebA-Mask, chosen for its multi-label segmentation properties, serves as an ideal foundation for this study.46 For training set A, 4000 original face images were selected from this dataset. To address the diverse types of occlusions found in real-world scenarios, 4000 unoccluded images were also included for training set B. During data preparation, variability was introduced by applying element-wise multiplication, adding randomly positioned and sized black occlusions to the original images in training set B. Throughout the training process, batches of data from both datasets A and B were randomly selected. Segmentation labels from dataset A were fed into the generator to enrich facial structural information, whereas pixel-level details from dataset B were directly used as input for the generator. A fivefold cross-validation approach was employed, dividing the entire dataset into five subsets. Four subsets were used for training, whereas the remaining subset served as the test set. This process was repeated five times, with each subset serving as the test set once, ensuring a comprehensive evaluation. The test set involved in each training iteration is referred to as testing set 1. Figure 9 displays samples from the dataset. To comprehensively evaluate the performance of LW-DCGAN in occluded face recognition tasks, we additionally selected three widely used datasets as test sets. These three datasets are not involved in the training process and are used only for testing. The Caltech occluded faces in the wild (COFW) dataset is specifically designed to study occluded faces, featuring numerous images with various occlusions such as hats, glasses, and hands, which facilitates testing the performance of the model under complex occlusion conditions.47 The LFW dataset includes images with occlusions such as glasses and hats and is primarily used to assess the model's performance under natural occlusion conditions.48 The masked face recognition v2 (MFR2) dataset, introduced during the COVID-19 pandemic, addresses face recognition challenges with masks. It includes images with various mask types, such as medical masks, N95 masks, and cloth masks, as well as other occluders such as sunglasses and hats, simulating partially occluded facial scenes in real-world scenarios.49 Table 4 displays the varying occlusion ratios across the datasets.

Table 4. Dataset distribution.
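A sketch of the occlusion synthesis used for training set B is shown below: a binary mask containing a randomly positioned, randomly sized black rectangle is applied to the image by element-wise multiplication. The occluder size range is an assumption; the paper does not report the sampling limits.

```python
import numpy as np


def add_random_occlusion(image, min_frac=0.1, max_frac=0.4, rng=None):
    """Black out a randomly positioned, randomly sized rectangle via a binary mask."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    occ_h = int(h * rng.uniform(min_frac, max_frac))        # occluder height
    occ_w = int(w * rng.uniform(min_frac, max_frac))        # occluder width
    top = rng.integers(0, h - occ_h + 1)
    left = rng.integers(0, w - occ_w + 1)
    mask = np.ones((h, w, 1), dtype=image.dtype)
    mask[top:top + occ_h, left:left + occ_w] = 0            # zero out the occluded region
    return image * mask                                      # element-wise multiplication
```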
4.2. Experimental Setup and Parameters

The LW-DCGAN algorithm model developed in this study was trained using the TensorFlow 2.5.0 framework, with acceleration provided by an NVIDIA GeForce RTX 2080 Ti GPU. The training was conducted on a computer system equipped with an Intel(R) Core(TM) i5-11320H @ 3.20 GHz processor, Intel(R) UHD Graphics 630 adapter, 16 GB of memory, and a 64-bit operating system. The model was programmed in Python 3.8.3 using the PyCharm 2021 integrated development environment. The experimental environment offered ample computing resources and stable software support, ensuring the efficient training of the LW-DCGAN model and the achievement of accurate results. The parameter settings used in the experiment are detailed in Table 5.

Table 5. Experimental parameters.

To better utilize the face segmentation label information during the training process of training set A, an additional channel is added at the input layer to incorporate the segmentation labels corresponding to the facial images. A branch is then introduced to handle this segmentation label input, which includes a convolution layer for mapping label features that are subsequently fused with the main network. Considering the added task of predicting segmentation, a convolution prediction branch is inserted just before the output of the final transposed convolution layer. This branch has the same number of output channels as the segmentation label channel and is designed to extract segmentation feature maps. During the initialization phase, the additional channel parameters of the generator are set to a non-trainable state, effectively freezing these parameters. Throughout the training loop, by selectively enabling or restoring the training status of these additional channel parameters, we can flexibly control the extent to which the generator utilizes multi-label information when processing different dataset groups. For datasets that do not use multi-label segmentation, the focus remains on mean squared error loss, feature matching loss, and adversarial loss.
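The label-conditioning arrangement described above can be sketched as follows: the segmentation labels are modeled here as a separate input that a convolution branch maps and fuses with the image features, and that branch can be frozen or unfrozen depending on whether the current batch carries segmentation labels. The shapes, channel widths, and concatenation-based fusion are illustrative assumptions, and the segmentation prediction branch before the final transposed convolution is omitted.

```python
import tensorflow as tf
from tensorflow.keras import layers


def label_conditioned_inputs(image_shape=(128, 128, 3), label_shape=(128, 128, 1)):
    """Separate segmentation-label input, mapped by a convolution and fused with the image path."""
    image_in = layers.Input(shape=image_shape, name="occluded_image")
    label_in = layers.Input(shape=label_shape, name="segmentation_label")

    label_feat = layers.Conv2D(16, 3, padding="same", activation="relu",
                               name="label_branch")(label_in)      # label-mapping branch
    image_feat = layers.Conv2D(16, 3, padding="same", activation="relu")(image_in)
    fused = layers.Concatenate()([image_feat, label_feat])          # fuse with the main network
    return image_in, label_in, fused


# Freezing / unfreezing the label branch between dataset groups:
# model.get_layer("label_branch").trainable = False   # batches without segmentation labels
# model.get_layer("label_branch").trainable = True    # batches from training set A
```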
4.3. Ablation Experiment

Ablation experiments help determine the relative importance of various components within the model. By comparing performance differences after removing specific components, we can identify which factors contribute the most to overall model performance. The evaluation metrics used in this experiment include accuracy, recall, SSIM, and PSNR. Accuracy, a commonly used metric in classification models, represents the proportion of correctly classified samples to the total number of samples. It is calculated as shown in Eq. (23):

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \tag{23}$$

where TP denotes the number of samples correctly classified as positive, indicating the true positive count; FP represents the count of samples incorrectly classified as positive, despite being negatives in reality; TN refers to the number of samples correctly classified as negatives, i.e., true negatives; and FN signifies the count of samples incorrectly classified as negatives, despite being positives in reality. The recall rate refers to the proportion of samples that the model correctly predicts as positive among all the samples that are actually positive, as shown in Eq. (24):

$$\mathrm{Recall} = \frac{TP}{TP + FN}. \tag{24}$$

The SSIM is employed to gauge the similarity between two images, with a value closer to 1 indicating higher similarity. It is represented by Eq. (25):

$$\mathrm{SSIM}(x, y) = \frac{\left(2\mu_x \mu_y + C_1\right)\left(2\sigma_{xy} + C_2\right)}{\left(\mu_x^2 + \mu_y^2 + C_1\right)\left(\sigma_x^2 + \sigma_y^2 + C_2\right)}, \tag{25}$$

where $x$ and $y$ represent the two images, respectively; $\mu_x$ and $\mu_y$ illustrate the respective means of the two images; $\sigma_x$ and $\sigma_y$ express the standard deviations of the two images; $\sigma_{xy}$ symbolizes the covariance of the two images; and $C_1$ and $C_2$ are constants introduced for stability. PSNR is a metric used to measure image quality, and a higher value indicates better image quality. It is given by Eq. (26):

$$\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{\mathrm{MAX}^2}{\mathrm{MSE}}\right), \tag{26}$$

where MSE depicts the mean square error, that is, the degree of difference between the two images, and MAX is the maximum possible pixel value. SSIM and PSNR measure the image generation capabilities of the generative adversarial network module in LW-DCGAN.
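The metrics in Eqs. (23)-(26) can be computed as in the sketch below: the classification counts are accumulated directly from binary predictions, and `tf.image.ssim` and `tf.image.psnr` provide the image-quality measures. The assumption that predictions are already thresholded to 0/1 is for illustration.

```python
import tensorflow as tf


def classification_metrics(y_true, y_pred):
    """Accuracy and recall from binary labels and predictions (Eqs. 23-24)."""
    y_true = tf.cast(y_true, tf.bool)
    y_pred = tf.cast(y_pred, tf.bool)
    tp = tf.reduce_sum(tf.cast(y_true & y_pred, tf.float32))
    tn = tf.reduce_sum(tf.cast(~y_true & ~y_pred, tf.float32))
    fp = tf.reduce_sum(tf.cast(~y_true & y_pred, tf.float32))
    fn = tf.reduce_sum(tf.cast(y_true & ~y_pred, tf.float32))
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)
    return accuracy, recall


def image_quality_metrics(real, generated, max_val=1.0):
    """SSIM and PSNR between target and generated images (Eqs. 25-26)."""
    ssim = tf.reduce_mean(tf.image.ssim(real, generated, max_val=max_val))
    psnr = tf.reduce_mean(tf.image.psnr(real, generated, max_val=max_val))
    return ssim, psnr
```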
The experiment is divided into three groups. In experiment group A, the full architecture of LW-DCGAN is maintained, including both the FPN and ARCM components. In experiment group B, the FPN is removed from LW-DCGAN while retaining ARCM, resulting in a network denoted as LW-DCGAN (ARCM). In experiment group C, ARCM is excluded whereas FPN is retained, creating the network referred to as LW-DCGAN (FPN). All three groups utilize the same training dataset and apply a cosine annealing learning rate decay strategy. Figure 10 illustrates the changes in each metric from 0 to 200 epochs across these ablation experiments. By comparing the performance indicators of LW-DCGAN, LW-DCGAN (ARCM), and LW-DCGAN (FPN), several valuable conclusions can be drawn regarding the roles of FPN and ARCM in LW-DCGAN. First, LW-DCGAN demonstrated the highest performance, with an accuracy rate of 88%, a recall rate of 90%, an SSIM index of 0.877, and a PSNR of 28.3 after 200 epochs. The metrics for LW-DCGAN (ARCM) declined compared with LW-DCGAN, showing a 6% decrease in accuracy, a 4% decrease in recall, a drop of 0.035 in SSIM, and a reduction of 0.6 in PSNR. This suggests that FPN has a significantly positive impact on the performance of image generation and classification tasks. In contrast, when ARCM was removed while retaining FPN in LW-DCGAN (FPN), accuracy decreased by 13%, recall by 10%, SSIM by 0.082, and PSNR by 0.95. These significant declines in metrics indicate that the attention mechanism plays a crucial role in the model's performance. Overall, LW-DCGAN, incorporating both FPN and ARCM, achieved superior performance.

4.4. Generalization Experiment

In the generalization experiment, we meticulously recorded the accuracy, recall, generator loss, discriminator loss, and changes in PSNR and SSIM for both the training set and testing set 1 throughout the iteration process. In addition, the final performance metrics of the model, including accuracy, recall, SSIM, and PSNR, were evaluated on widely used datasets such as COFW, LFW, and MFR2. This analysis primarily serves to assess how well the model generalizes to unseen data. Figure 11 illustrates the accuracy and recall of the model on the training set and testing set 1 during the iterative training process. As the number of epochs increased, both the training set accuracy and testing set 1 accuracy showed gradual improvement. Initially, at epoch 0, the accuracy was relatively low, but as training progressed, it steadily increased until reaching a saturation point. The accuracy on the training set rose from 0.20 to 0.88, whereas the accuracy on testing set 1, after some initial fluctuations, showed a significant upward trend, increasing from 0.15 to 0.85. Similarly, recall also improved for both datasets, with the training set stabilizing at 0.9 and testing set 1 at 0.87. This suggests that the LW-DCGAN model demonstrates a strong generalization ability in recognizing occluded faces, achieving high accuracy and recall on testing set 1. In addition, a comparison of accuracy and recall between the training set and testing set 1 indicates that the model did not exhibit significant signs of overfitting. The experiment further evaluates the LW-DCGAN model using both generator loss and discriminator loss. Figure 12 illustrates the generator loss. The data show a gradual decrease in both mean square error loss and feature matching loss as training progresses. On the training set, the generator's loss decreases from an initial value of 1.47 to 0.93. This trend aligns with the characteristics of LW-DCGAN, indicating that the model gradually learns more effective generation and discrimination strategies during training.
Meanwhile, adversarial loss fluctuates throughout the entire training process, reflecting the oscillations caused by the ongoing competition between the generator and discriminator. However, as shown in Fig. 12, adversarial loss exhibits an overall downward trend. To form the comprehensive loss function for the discriminator, we combine binary cross-entropy loss with gradient penalty loss. The resulting discriminator loss rates are recorded in Fig. 13, which illustrates an overall fluctuating downward trend in the loss rates for both the training set and testing set 1. The variation curves of PSNR and SSIM on the training set and testing set 1 are illustrated in Fig. 14. From the graph, it is evident that although PSNR experiences significant fluctuations, it shows an overall upward trend. Meanwhile, SSIM gradually increases in both the training set and testing set 1, indicating the progressive enhancement of image quality throughout the training process. To further verify the generalization ability of LW-DCGAN, we evaluated the performance of the model on COFW, LFW, and MFR2. Table 6 shows the comparison results of LW-DCGAN on the training set, testing set 1, COFW, LFW, and MFR2 in terms of accuracy, recall, SSIM, and PSNR.

Table 6. Performance of LW-DCGAN on different datasets.
On the COFW dataset, which includes complex occlusions, the model achieves an accuracy of 82% and a recall of 87%, with a PSNR of 27.6 and an SSIM of 0.867, demonstrating robustness in challenging conditions. On the LFW dataset, featuring natural occlusions, the model records a high accuracy of 94% and a recall of 91%, with a PSNR of 28.5 and an SSIM of 0.874. Finally, on the MFR2 dataset, which focuses on occlusions from masks, the model performs with an accuracy of 86%, a recall of 88%, a PSNR of 27.9, and an SSIM of 0.877. Overall, these results validate LW-DCGAN's strong performance and adaptability across diverse occluded facial recognition tasks.

4.5. Comparative Experiment

The performance of the LW-DCGAN algorithm was further evaluated through comparison experiments with other GAN variants, including GAN, DCGAN, WGAN-GP, and the Pix2Pix network. Comparing LW-DCGAN with GAN serves as a benchmark to assess its lightweight face recognition capabilities, given GAN's foundational role in generative models. The comparison with DCGAN allows for an analysis of whether LW-DCGAN outperforms more complex deep convolutional architectures. WGAN-GP, which addresses issues such as training instability and mode collapse through improved loss functions, is compared with LW-DCGAN to evaluate the impact of a lightweight design on training stability. Finally, Pix2Pix, a conditional generative adversarial network for image translation, is compared with LW-DCGAN to determine whether the latter excels in occluded face recognition tasks. The parameter settings for each model are provided in Table 7.

Table 7. Comparison of adversarial network parameter designs.
To better compare the performance of LW-DCGAN in face recognition, we selected FaceNet, ArcFace, VGGFace, and SphereFace (based on ResNet-64) as benchmarks. FaceNet is recognized for its robustness in embedding space optimization using triplet loss, ArcFace for its superior inter-class separability through angular margins, VGGFace as a well-established benchmark for evaluating general recognition performance, and SphereFace for its enhanced angle-based inter-class separation. This evaluation helps position LW-DCGAN relative to these established models. The experimental parameters of the four models are shown in Table 8. To intuitively evaluate the performance of various networks in occluded face recognition, we primarily compare the models based on accuracy and recall. To assess the convergence and stability of different models and observe the variation of these metrics during the training process, the same training dataset and learning rate cosine annealing strategy are employed for each model. Figure 15 illustrates the variations in accuracy and recall for different models from epoch 0 to 200. In comparative experiments, LW-DCGAN demonstrates significant superiority in training accuracy, achieving a final accuracy of 88% and a recall rate of 90%. Among the GAN-related models, DCGAN and WGAN-GP perform relatively well, with accuracies of 85% and recall rates of 87% and 88%, respectively. However, GAN and Pix2Pix show poor performance. Among the face recognition models—FaceNet, ArcFace, VGGFace, and SphereFace (based on ResNet-64)—SphereFace outperforms the others, achieving an accuracy of 80% and a recall rate of 83%. In contrast, VGGFace performs the worst on datasets with severe occlusion. To further illustrate the advantage of LW-DCGAN in terms of network scale, we compare these models in terms of model memory, parameters, and inference time. All values are shown in Table 9.

Table 8. Comparison of face recognition model parameter designs.
Table 9. Comparison of model size and inference time.
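The measurement procedure behind Table 9 is not spelled out in the text; the sketch below shows one plausible way, assuming PyTorch, to obtain the three quantities compared there: parameter count, serialized model size, and average inference time. The 128x128 RGB input resolution, the temporary file path, and the number of timing runs are assumptions.

```python
# A hedged sketch (not the authors' procedure) for measuring model scale and
# latency as compared in Table 9: parameter count, size of the saved weights,
# and mean inference time. Input resolution and run counts are assumptions.
import os
import time
import torch
import torch.nn as nn

def profile_model(model: nn.Module, input_shape=(1, 3, 128, 128), runs=100):
    model.eval()
    n_params = sum(p.numel() for p in model.parameters())   # total parameter count
    tmp_path = "model_tmp.pt"                                # hypothetical scratch file
    torch.save(model.state_dict(), tmp_path)                 # serialize weights to disk
    size_mb = os.path.getsize(tmp_path) / (1024 ** 2)        # model memory in MB
    os.remove(tmp_path)
    x = torch.randn(*input_shape)
    with torch.no_grad():
        for _ in range(10):                                  # warm-up passes
            model(x)
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
        avg_time_s = (time.perf_counter() - start) / runs    # mean latency in seconds
    return n_params, size_mb, avg_time_s

# Example: profile a small placeholder network.
if __name__ == "__main__":
    dummy = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(32, 3, 3, padding=1))
    print(profile_model(dummy))
```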
The metrics in Table 9 show that, after optimization, the LW-DCGAN generator requires 4.6 MB less memory than DCGAN, has 1M fewer parameters, and reduces inference time by 0.07 s.

5. Discussion

In this study, we introduced the lightweight deep convolutional generative adversarial network (LW-DCGAN), designed specifically to address the challenge of recognizing occluded faces, and evaluated its capabilities through extensive experimental analyses that underscore its potential for practical deployment. LW-DCGAN advances occluded face recognition by overcoming limitations of earlier methods through its network architecture and training algorithms. The multi-layer architecture extracts detailed features across multiple scales, improving the model's ability to handle diverse and complex datasets. Importantly, the lightweight framework is more efficient than conventional deep learning models, reducing the dependency on high-end computational resources and enabling deployment on devices with limited processing power, which is crucial for real-time applications. However, deploying LW-DCGAN also presents challenges. The model's performance is closely tied to the data used for training and testing: although the CelebA-Mask dataset includes a wide range of occluded facial images, it does not cover all occlusion scenarios encountered in real-world settings. LW-DCGAN performs well on common occlusions, but its effectiveness diminishes for rarer or more complex occlusion types. This limitation calls for further robustness testing and more advanced data augmentation to broaden the model's applicability across occlusion conditions and severity levels. In addition, although LW-DCGAN is designed to be more efficient than traditional models, the computational resources required during training remain considerable, which could hinder its widespread adoption, particularly in resource-limited environments.

6. Conclusion

This paper presents LW-DCGAN, a generative adversarial network developed specifically for occluded face recognition. LW-DCGAN uses a streamlined convolutional network, enhanced with feature pyramids and attention mechanisms, to generate high-quality images at reduced model complexity. Our ablation studies confirmed that the FPN and ARCM components significantly improve LW-DCGAN's performance on occluded faces. Generalization tests further validated its effectiveness across the COFW, LFW, and MFR2 datasets. Comparative experiments against GAN, DCGAN, WGAN-GP, Pix2Pix, and established face recognition models such as FaceNet, ArcFace, VGGFace, and SphereFace highlighted LW-DCGAN's superior accuracy, recall, and smaller model size. In summary, LW-DCGAN offers a robust and scalable solution for occluded face recognition, with potential applications in broader image generation contexts. Future work will focus on optimizing LW-DCGAN for rapid and precise classification of occluded or blurred dynamic visual data streams.

Code and Data Availability

Some or all data, models, or code generated or used during the study are available from the corresponding author upon request.
Acknowledgments

This work was partly supported by the Key R&D Projects in Henan Province (Grant No. 241111211800); the Key Scientific and Technological Project of Henan Province (Grant Nos. 232102111128, 222102210098, 222102320181, 212102210431, and 212102310087); in part by the Major Special Project of Xinxiang City (Grant No. 21ZD003); in part by the Key Scientific Research Projects of Colleges and Universities in Henan Province (Grant Nos. 23B520003, 21A520001, and 20A520013); and in part by the Henan Province Postdoctoral Support Program (Grant No. HN2022165).
Biography

Yingying Lv received her BS degree from the School of Information Engineering, Henan University of Science and Technology, China, in 2008 and her MS degree from the School of Information Engineering, Zhengzhou University, China, in 2011. She is currently a lecturer at the School of Computer Science and Technology, Henan Institute of Science and Technology. Her research interests include deep learning, neural networks, and intelligent computing.

Jianping Wang received his BS degree from the Department of Computer Science and Technology, Shaanxi Normal University, Xi'an, China, in 2004; his MS degree from the Department of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China, in 2011; and his PhD in information and communication engineering from Wuhan University of Technology, Wuhan, China, in 2019. He completed his postdoctoral research in control science and engineering at Henan University of Science and Technology, Henan, China, in 2024. He is currently a professor and the deputy dean at the School of Computer Science and Technology, Henan Institute of Science and Technology. His research interests lie in the areas of intelligent computing, software-defined networks, and wireless sensor networks.