With the rapid progress of deep learning, its application areas have expanded significantly, particularly in multimedia forensics, anti-forensics, and counter-anti-forensics. The generative adversarial network (GAN) is one of the most popular deep learning models and is widely used in anti-forensics of JPEG-compressed images, producing images that resemble the uncompressed originals in order to fool JPEG compression detectors. In this paper, we analyze the generator and discriminator of the GAN model, using the former to generate more realistic images and the latter to distinguish generated images from original ones. We investigate the proposed method using two different GAN models whose generators and discriminators are trained separately. We then take the images produced by the generator of one model and classify them as generated or original with the discriminator of the other model, and vice versa. The reconstructed images produced by both generators are more realistic in visual perception and of high enough quality to deceive a JPEG compression detector. The discriminators can differentiate between generated and real images only when the generated images were reconstructed by their own generators. When the images produced by the generator of one model are classified with the discriminator of the other model, detection accuracy drops sharply.
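As a rough illustration of the cross-model evaluation protocol described above, the PyTorch sketch below pairs the generator of one GAN with the discriminator of the other. The tiny network architectures, the detection_rate helper, and the random input batch are all illustrative stand-ins, not the models or data used in the paper.

```python
import torch
import torch.nn as nn

# Stand-in networks; the paper's actual architectures are not specified here.
def make_generator():
    return nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 3, 3, padding=1))

def make_discriminator():
    return nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                         nn.Flatten(), nn.LazyLinear(1))

@torch.no_grad()
def detection_rate(gen, disc, jpeg_batch, threshold=0.5):
    """Restore JPEG-compressed images with `gen`, then ask `disc`
    (possibly from the *other* GAN) to flag them as generated.
    Assumes the discriminator outputs one logit per image, with
    high values meaning 'generated'."""
    restored = gen(jpeg_batch)              # anti-forensic restoration
    logits = disc(restored)
    return (torch.sigmoid(logits) > threshold).float().mean().item()

gen_a, disc_a = make_generator(), make_discriminator()
gen_b, disc_b = make_generator(), make_discriminator()
batch = torch.rand(8, 3, 64, 64)            # stand-in JPEG-compressed images

# After training each pair separately, the paper's finding is that the
# same-model rate stays high while the cross-model rate drops sharply.
print("same-model :", detection_rate(gen_a, disc_a, batch))
print("cross-model:", detection_rate(gen_a, disc_b, batch))
```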
We present an intracoding method that is applicable to depth map coding in multiview plus depth systems. Our approach combines skip prediction and plane segmentation-based prediction. The proposed depth intraskip prediction uses the direction estimated at both the encoder and decoder, and does not need to encode residual data. Our plane segmentation-based intraprediction divides the current block into two regions and applies a different prediction scheme to each segmented region. This avoids incorrect estimations across region boundaries, resulting in higher prediction accuracy. Simulation results demonstrate that the proposed scheme is superior to H.264/advanced video coding intraprediction and improves subjective rendering quality.
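A minimal NumPy sketch of the plane segmentation idea follows, assuming the block is split by thresholding the reconstructed reference samples at their mean and each region is filled with its own DC predictor. The paper's actual segmentation and per-region prediction schemes may differ.

```python
import numpy as np

def plane_segmented_intra_pred(ref_top, ref_left, size=8):
    """Sketch: split the reference samples into two depth regions by a
    mean threshold, compute a DC predictor per region, and assign each
    block pixel the DC of the region its nearest reference belongs to."""
    refs = np.concatenate([ref_top, ref_left]).astype(np.float64)
    thr = refs.mean()                          # simple segmentation threshold
    lo, hi = refs[refs <= thr], refs[refs > thr]
    dc_lo = lo.mean() if lo.size else thr
    dc_hi = hi.mean() if hi.size else thr
    pred = np.empty((size, size))
    for y in range(size):
        for x in range(size):
            # Nearest reference: top sample for pixels closer to the top
            # edge, left sample otherwise.
            ref = ref_top[x] if x >= y else ref_left[y]
            pred[y, x] = dc_lo if ref <= thr else dc_hi
    return pred

top = np.array([10, 10, 10, 10, 80, 80, 80, 80])   # toy depth edge
left = np.full(8, 10)
print(plane_segmented_intra_pred(top, left))
```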
Schemes to enhance human visual perception in three-dimensional (3-D) video applications with depth map data are proposed. Depth estimation is an important part of free viewpoint television and 3-DTV because the accuracy of depth information directly affects the quality of video synthesized at an intermediate viewpoint. However, generating an accurate depth map is a computationally complex process, which makes real-time implementation challenging. To obtain accurate depth information with low complexity, a depth map relabeling algorithm and a hybrid matching algorithm are proposed for the depth estimation step. These depth map acquisition techniques are based on human perception, which is more sensitive to moving objects than to a static background, and they take into account the importance of properly processing object boundaries. Experimental results demonstrate that the proposed schemes provide a synthesized view with both higher subjective visual quality and better objective quality, in terms of peak signal-to-noise ratio, than the legacy depth estimation reference software.
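The sketch below illustrates the perception-motivated relabeling idea under a strong simplification: motion is detected by plain frame differencing rather than the proposed hybrid matching, and static-background pixels simply reuse the previous depth for temporal stability. All names and thresholds are hypothetical.

```python
import numpy as np

def relabel_depth(depth_prev, depth_cur, frame_prev, frame_cur, motion_thr=10):
    """Keep the newly estimated depth only where the color frames indicate
    motion; elsewhere reuse the previous depth so the static background
    stays temporally stable (where perception is less tolerant of flicker)."""
    moving = np.abs(frame_cur.astype(np.int32) -
                    frame_prev.astype(np.int32)) > motion_thr
    return np.where(moving, depth_cur, depth_prev)

h, w = 120, 160
frame_prev = np.zeros((h, w), np.uint8)
frame_cur = frame_prev.copy()
frame_cur[40:80, 60:100] = 200                   # toy moving object
depth_prev = np.full((h, w), 50, np.uint8)       # stable background depth
depth_cur = np.random.randint(45, 56, (h, w)).astype(np.uint8)  # noisy estimate
depth_cur[40:80, 60:100] = 120                   # object depth
print(relabel_depth(depth_prev, depth_cur, frame_prev, frame_cur)[60, 80])
```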
This paper presents an efficient depth map coding method based on color information in multiview plus depth (MVD) systems. In contrast to conventional depth map coding, in which the depth video is coded separately, the proposed scheme exploits color information for depth map coding. In detail, the proposed algorithm subsamples the input depth data along the temporal direction to reduce the bit rate, and the non-encoded depth frames are recovered at the decoder side, guided by the motion information extracted from the decoded color video. The simulation results show the high coding efficiency of the proposed scheme, and also that the recovered depth frames differ little from the reconstructed ones. Furthermore, the scheme provides temporally consistent depth maps, which result in better subjective quality for view interpolation.
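The decoder-side recovery step can be sketched as follows, assuming block motion vectors color_mvs have already been parsed from the decoded color bitstream; each block of a skipped depth frame is then motion-compensated from the nearest decoded depth frame. This is a minimal sketch, not the paper's exact recovery procedure.

```python
import numpy as np

def recover_depth_frame(depth_ref, color_mvs, block=8):
    """Recover a skipped depth frame: for each block, reuse the motion
    vector (dy, dx) of the co-located color block and copy the displaced
    block from the decoded reference depth frame. No depth residual is
    transmitted for these frames."""
    h, w = depth_ref.shape
    rec = np.zeros_like(depth_ref)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dy, dx = color_mvs[by // block, bx // block]
            sy = np.clip(by + dy, 0, h - block)   # clamp to frame bounds
            sx = np.clip(bx + dx, 0, w - block)
            rec[by:by+block, bx:bx+block] = depth_ref[sy:sy+block, sx:sx+block]
    return rec

depth_ref = np.tile(np.arange(64, dtype=np.uint8), (64, 1))
mvs = np.zeros((8, 8, 2), np.int32)
mvs[..., 1] = 4                                   # toy: uniform 4-pixel shift
print(recover_depth_frame(depth_ref, mvs)[0, :8])
```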
A new technique for film grain noise extraction, modeling, and synthesis is proposed and applied to the coding of high-definition video in this work. Film grain noise is viewed in the movie industry as part of the artistic presentation. On one hand, since film grain noise can boost the natural appearance of pictures in high-definition video, it should be preserved in high-fidelity video processing systems. On the other hand, coding video with film grain noise is expensive. It is therefore desirable to extract the film grain noise from the input video as a pre-processing step at the encoder, and to re-synthesize it and add it back to the decoded video as a post-processing step at the decoder. Under this framework, the coding gain for the denoised video is higher, while the quality of the final reconstructed video is still well preserved. Following this idea, we present a method to remove film grain noise from images and video without distorting the original content. In addition, we describe a parametric model with a small set of parameters to represent the extracted film grain noise. The proposed model generates film grain noise that is close to the real noise in terms of power spectral density and cross-channel spectral correlation. Experimental results demonstrate the efficiency of the proposed scheme.
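A toy version of the analysis/synthesis loop might look like the following: the extracted grain is summarized by its standard deviation and spectral shape, and new grain is synthesized by shaping white noise in the frequency domain to match that spectrum. The paper's parametric model also captures cross-channel correlation, which this single-channel sketch omits.

```python
import numpy as np

def fit_grain_model(noise):
    """Summarize extracted grain by its std and the normalized magnitude
    shape of its power spectral density (a stand-in for the paper's
    small parametric model)."""
    psd = np.abs(np.fft.fft2(noise)) ** 2
    return noise.std(), np.sqrt(psd / psd.mean())

def synthesize_grain(shape, std, psd_shape):
    """Shape white Gaussian noise in the frequency domain so its PSD
    matches the fitted model, then rescale to the measured std."""
    white = np.fft.fft2(np.random.randn(*shape))
    grain = np.real(np.fft.ifft2(white * psd_shape))
    return grain * (std / grain.std())

extracted = np.random.randn(64, 64)        # stand-in for extracted grain
std, psd_shape = fit_grain_model(extracted)
fake = synthesize_grain((64, 64), std, psd_shape)
print(round(fake.std(), 3), round(std, 3))  # matched noise strength
```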