We report a virtual image refocusing framework for fluorescence microscopy that extends the imaging depth-of-field by ~20-fold and improves lateral resolution. The method combines point-spread function (PSF) engineering with a cascaded convolutional neural network model, which we term W-Net. We tested this W-Net architecture by imaging 50 nm fluorescent nanobeads at various defocus distances using a double-helix PSF, demonstrating a ~20-fold improvement in depth-of-field over conventional wide-field microscopy. The W-Net architecture can be used to develop deep-learning-based image reconstruction and computational microscopy techniques that exploit engineered PSFs, and it can significantly improve the spatial resolution and throughput of fluorescence microscopy.
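The abstract describes W-Net as a cascade of convolutional networks: one network's output feeds directly into the next, mapping a defocused, PSF-engineered image to a refocused one. As an illustration of that cascading pattern only, here is a minimal NumPy sketch with randomly initialized toy convolution stages. All names (`ToyWNet`, `ToyUNetStage`), layer counts, and the roles assigned to each stage are hypothetical assumptions, not the trained architecture from the paper:

```python
import numpy as np

def conv2d_same(img, kernel):
    """Toy 'same'-size 2-D cross-correlation via zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

class ToyUNetStage:
    """Hypothetical stand-in for one network of the cascade:
    a few conv + ReLU layers with random, untrained kernels."""
    def __init__(self, rng, n_layers=2, ksize=3):
        self.kernels = [rng.standard_normal((ksize, ksize)) * 0.1
                        for _ in range(n_layers)]

    def __call__(self, x):
        for k in self.kernels:
            x = np.maximum(conv2d_same(x, k), 0.0)  # conv followed by ReLU
        return x

class ToyWNet:
    """Two stages chained back-to-back, echoing the cascaded design:
    the first stage's output is the second stage's input."""
    def __init__(self, rng):
        self.stage1 = ToyUNetStage(rng)  # e.g., handles the engineered PSF
        self.stage2 = ToyUNetStage(rng)  # e.g., performs the virtual refocusing
    def __call__(self, x):
        return self.stage2(self.stage1(x))

rng = np.random.default_rng(0)
model = ToyWNet(rng)
defocused = rng.random((32, 32))   # stand-in for a defocused DH-PSF image
refocused = model(defocused)       # same lateral size in and out: (32, 32)
```

In the actual framework the two networks are trained end-to-end on matched defocused/in-focus image pairs; this sketch only shows the data flow of a two-stage cascade.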
Xilin Yang, Luzhe Huang, Yilin Luo, Yichen Wu, Hongda Wang, Yair Rivenson, Aydogan Ozcan, "Three-dimensional virtual refocusing of point-spread function engineered images using cascaded neural networks," Proc. SPIE PC12019, AI and Optical Data Sciences III, PC1201906 (9 March 2022); https://doi.org/10.1117/12.2608278