We present a cross-modality super-resolution microscopy method based on the generative adversarial network (GAN) framework. A trained convolutional neural network takes a low-resolution image acquired with one microscopy modality and super-resolves it to match the resolution of an image of the same sample captured with a higher-resolution microscopy modality. This cross-modality super-resolution method is purely data-driven; that is, it does not rely on any knowledge of the image formation model or the point spread function. First, we demonstrated our method by super-resolving wide-field fluorescence microscopy images captured with a low-numerical-aperture objective (NA = 0.4) to match the resolution of images captured with a higher-NA objective (NA = 0.75). Next, we applied our method to confocal microscopy to super-resolve closely spaced nanoparticles and histone 3 sites within HeLa cell nuclei, matching the resolution of stimulated emission depletion (STED) microscopy images of the same samples. We further validated our method by super-resolving diffraction-limited total internal reflection fluorescence (TIRF) microscopy images to match the resolution of TIRF-SIM (structured illumination microscopy) images of the same samples, revealing endocytic protein dynamics in SUM159 cells and in the amnioserosa tissue of a Drosophila embryo. The super-resolved features in the network output show strong agreement with the ground-truth SIM reconstructions, each of which was synthesized from nine diffraction-limited TIRF images acquired under structured illumination. Beyond resolution enhancement, our method also offers an extended depth of field and an improved signal-to-noise ratio (SNR) in the network-inferred images compared with the corresponding ground-truth images.
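To make the training setup concrete, the following is a minimal sketch of GAN-based learning on registered cross-modality image pairs, assuming a PyTorch implementation. The layer counts, the loss weighting lambda_adv, and all names (Generator, Discriminator, train_step) are illustrative assumptions, not the architecture or hyperparameters used in this work.

```python
# Minimal sketch, assuming PyTorch. All shapes, names, and weights here are
# illustrative placeholders, not the published network or hyperparameters.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy convolutional generator: low-resolution modality in,
    super-resolved estimate out (same pixel grid, registered pairs)."""
    def __init__(self, channels=1, features=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Toy discriminator scoring whether an image looks like the
    high-resolution modality (e.g., STED or TIRF-SIM)."""
    def __init__(self, channels=1, features=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, features, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(features, features * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(features * 2, 1),  # one real/fake logit per image
        )

    def forward(self, x):
        return self.net(x)

def train_step(gen, disc, opt_g, opt_d, lr_img, hr_img, lambda_adv=0.01):
    """One purely data-driven update: no image-formation model or PSF is
    used, only registered (low-res, high-res) image pairs."""
    bce = nn.BCEWithLogitsLoss()
    pix = nn.MSELoss()
    real = torch.ones(hr_img.size(0), 1)
    fake_lbl = torch.zeros(hr_img.size(0), 1)

    # Discriminator: separate real high-res images from generator outputs.
    fake = gen(lr_img).detach()
    d_loss = bce(disc(hr_img), real) + bce(disc(fake), fake_lbl)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: pixel fidelity to the high-res modality plus an
    # adversarial term that pushes outputs toward that modality's statistics.
    fake = gen(lr_img)
    g_loss = pix(fake, hr_img) + lambda_adv * bce(disc(fake), real)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

The key design point the sketch illustrates is that supervision comes entirely from co-registered image pairs: the pixel loss anchors the output to the high-resolution acquisition, while the adversarial loss encourages realistic fine detail.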