In medical and microscopy imaging applications where the object is not directly visible, images are never identical to the ground truth. In three-dimensional structured illumination microscopy (3D-SIM), acquired images of the object have limited resolution due to the point spread function (PSF) of the imaging system. Additionally, because of the data acquisition process, images taken under low light and in the presence of electro-optical noise can have a low signal-to-noise ratio and suffer from other undesirable aberrations. To obtain a high-resolution restored image, the data must be digitally processed. The inverse imaging problem in 3D-SIM has been solved using various computational imaging techniques. Traditional model-based computational approaches can produce image artifacts because they depend on system parameters that are required yet not accurately known. Furthermore, some iterative computational imaging methods are computationally intensive. Deep learning (DL) approaches, in contrast to traditional image restoration methods, can tackle the problem without access to the analytical model. Although some are effective, they are biased because they do not exploit the 3D-SIM forward model. This research proposes an unrolled physics-informed (UPI) generative adversarial network (UPIGAN) for reconstructing 3D-SIM images, using data samples of mitochondria acquired with a 3D-SIM system. The design incorporates physics knowledge through the unrolling step, and the GAN employs a Residual Channel Attention super-resolution deep neural network (DNN) in its generator architecture. Both qualitative and quantitative comparisons show that including the UPI term in the GAN improves reconstruction relative to the same GAN architecture without it.
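As a rough illustration of the unrolling idea described above (a sketch, not the paper's implementation), an unrolled physics-informed reconstruction alternates a data-consistency gradient step, which uses the known forward model (convolution with the system PSF), with a learned correction. Here the learned correction is replaced by a trivial nonnegativity projection standing in for the RCAN generator; the function names and parameters are illustrative assumptions.

```python
import numpy as np

def forward_model(x, psf):
    # Forward operator A: circular convolution of the object estimate
    # with the system PSF, computed via the FFT.
    return np.real(np.fft.ifftn(np.fft.fftn(x) * np.fft.fftn(psf)))

def unrolled_reconstruction(y, psf, n_iters=5, step=0.5):
    """Sketch of an unrolled scheme: each iteration applies a
    physics-based data-consistency gradient step on ||A x - y||^2,
    then a placeholder "learned" correction (nonnegativity clip)
    standing in for the trained generator network."""
    x = y.copy()
    for _ in range(n_iters):
        residual = forward_model(x, psf) - y            # A x - y
        x = x - step * forward_model(residual, psf)     # A^T ~= A for a symmetric PSF
        x = np.clip(x, 0.0, None)                       # placeholder prior: nonnegativity
    return x
```

In the actual UPIGAN, the clipping step would be replaced by the generator network, and the number of unrolled iterations becomes a fixed architectural depth trained end-to-end.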