Localization of image forgery has gained significance as image-manipulation techniques and software become widely available. Deep learning approaches and generative adversarial network (GAN)-based models are currently used for forgery detection and localization. However, generating images from noise is challenging and adds architectural complexity, which in turn demands large amounts of training data and long training times. We present a redesigned GAN framework with comparatively lower architectural complexity that achieves good performance with an acceptable amount of training data and training time. The proposed network consists of a generator, based on an encoder-decoder architecture, that produces masks corresponding to the input images, followed by a mask refinement network that enhances the generated masks. The generator is trained with images and their corresponding ground-truth masks, and the refinement network is trained using the ground-truth masks. After combined training on the images and their matching masks, the full network localizes the forged regions. The reframed network is evaluated on three publicly available datasets; model performance is analyzed qualitatively through the generated localization maps and quantitatively through metrics such as the receiver operating characteristic, area under the curve (AUC), precision, and …
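The abstract describes the pipeline only at a high level. The following is a minimal PyTorch sketch of one plausible reading: an encoder-decoder generator that maps an RGB image to a single-channel forgery mask, followed by a small convolutional mask-refinement network. All layer names, channel sizes, and the use of batch normalization here are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch (not the authors' exact model): encoder-decoder
# generator predicting a coarse forgery mask, plus a refinement network.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # 3x3 conv -> batch norm -> ReLU (batch normalization assumed)
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class MaskGenerator(nn.Module):
    """Encoder-decoder generator: RGB image -> coarse forgery-mask logits."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            conv_block(3, 32), nn.MaxPool2d(2),
            conv_block(32, 64), nn.MaxPool2d(2),
            conv_block(64, 128),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            conv_block(128, 64),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            conv_block(64, 32),
            nn.Conv2d(32, 1, kernel_size=1),  # single-channel mask logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class MaskRefiner(nn.Module):
    """Refinement network: coarse mask -> refined mask logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(1, 16),
            conv_block(16, 16),
            nn.Conv2d(16, 1, kernel_size=1),
        )

    def forward(self, mask_logits):
        return self.net(torch.sigmoid(mask_logits))

# Combined forward pass: image -> coarse mask -> refined localization map.
generator, refiner = MaskGenerator(), MaskRefiner()
image = torch.randn(1, 3, 256, 256)               # dummy RGB input
refined_mask = torch.sigmoid(refiner(generator(image)))
print(refined_mask.shape)                          # torch.Size([1, 1, 256, 256])
```

In this reading, the generator and refiner would first be trained separately against the ground-truth masks and then fine-tuned jointly, with the refined map thresholded to localize forged regions; the specific losses and training schedule are not given in the abstract.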