1. Introduction

The observation of the three-dimensional (3-D) behavior of animals and the 3-D movement of living cells has significant implications for neurobiology and medical science,1,2 both of which require a microscope providing high spatiotemporal resolution. Scanning-type 3-D microscopy, represented by confocal microscopy, provides high-resolution 3-D images up to the Abbe limit but has a fundamental frame-rate limitation because of its scanning time.3 A number of approaches have been proposed to capture 3-D information by reducing the scanning time4 or by mapping the axial information onto a planar capturing device.5 However, previous approaches focused on reconstituting the obtained 3-D information with post-processing rather than visualizing it with 3-D display systems in real time, which would enable observation of the sample's 3-D behavior and direct interaction. In 3-D microscopy, how to display the 3-D information is as important as how to obtain it. The best way to deliver 3-D information to the observer is to present whole 3-D images with a 3-D display system, and various 3-D display systems, including stereoscopic, multiview, integral imaging, and holographic displays, have been designed to reconstruct microscopic samples in 3-D.6–8

Light-field microscopy (LFM)1,6,9–13 is an adequate method for watching the 3-D behavior of a sample; it obtains 3-D information in a single shot with a microlens array (MLA) at the image plane. The pixels behind each microlens record both the position and the direction of light rays, and the captured four-dimensional (4-D) light field can be reconstructed into a 3-D scene in real time not only computationally1,9–12 but also optically6,14 with a 3-D display system. Based on the structural symmetry between LFM and integral imaging, in-vivo micro-objects have been reconstructed in 3-D space in real time.15,16 However, despite these special and powerful characteristics, LFM has not been used practically in biological and medical imaging because of its reduced lateral resolution. The MLA in front of the charge-coupled device (CCD) sacrifices lateral resolution while enhancing the depth of field (DOF), and the total amount of obtained information is limited by the number of pixels and by diffraction.9 Recently, the light-field deconvolution microscope (LFDM) was introduced to deconvolve a high-resolution 3-D scene using a point spread function of LFM computed with wave optics.1,10 However, the post-processing time was too long for real-time observation, and the improved resolution was still worse than that of a two-dimensional (2-D) optical microscope (OM) with the same objective. Furthermore, the maximum resolution at the native object plane was limited by the pitch of the MLA, and the method suffered from reconstruction artifacts around the native object plane.

Here, we present dual-dimensional microscopy (DDM), which captures both 2-D and light-field images of an in-vivo sample simultaneously and optically visualizes them with a 3-D display system in real time. Extending our preliminary approach,17 a real-time upsampling algorithm is proposed in which an upsampled light field is synthesized from the captured light-field and 2-D images based on the Fourier slice photography theorem.18,19 The whole process from capturing to displaying is done in real time with a parallel computation algorithm, and the upsampled light-field images are optically reconstructed with a computational light-field display (LFD).
The wave optics simulation verifies that the DDM provides higher resolution, up to the diffraction limit, and higher correspondence to the reference 3-D data than LFM. Compared with conventional LFM, the additional 2-D image greatly enhances the lateral resolution at the native object plane up to the diffraction limit and compensates for the image degradation at that plane. A DDM setup is implemented by appending a dual-view observation attachment to an LFM setup. We present a real-time 3-D interactive experiment with Caenorhabditis elegans (C. elegans) in which we observe the 3-D behavior (spatiotemporal behavior with the enhanced depth of field) of C. elegans via a computational LFD and track it with the stage. We also suggest a bandwidth reshaping method that applies an additional aperture inside the DDM to deconvolve the 3-D volume without reconstruction artifacts. The reconstruction artifact region can be eliminated by increasing the DOF of the OM-path.

2. Dual-Dimensional Microscopy and Light-Field Upsampling

Figure 1(a) shows the schematic diagram of a DDM setup. The light rays originating from the specimen are collimated by the infinity-corrected objective lens. The collimated beam is then divided into two at the beam splitter. One path is identical to an LFM (LFM-path), and the other path is an optical microscope (OM-path). The transmitted beam is converged by a tube lens, and a light-field image is obtained with an MLA located at the image plane and a CCD (CCD1) focusing on the back focal plane of the MLA. The reflected beam is reflected again by a mirror and converged by a tube lens, and a 2-D image is captured with the other CCD (CCD2) located at the image plane. Note that the MLA and CCD2 are both located at the native image plane. CCD1 and CCD2 are synchronized with an external signal. With this optical configuration, the DDM setup can capture 2-D images and light-field images simultaneously.

The obtained 2-D and light-field images can be synthesized into an upsampled light-field image with an algorithm based on the Fourier slice photography theorem.18–20 Assume that the 4-D light field of a sample at the object plane is $L(x,y,u,v)$. Then, the 2-D image captured from CCD2 can be expressed as follows:

$$E(x,y) = C \int_{-v_{\max}}^{v_{\max}} \int_{-u_{\max}}^{u_{\max}} \bar{L}(x,y,u,v)\,du\,dv, \tag{1}$$

where $u_{\max}$ and $v_{\max}$ denote the maximum numerical aperture (NA) of the objective lens in the $u$- and $v$-directions, respectively, $\bar{L}$ is the normalized light field at the image plane considering the angle between the radiance and the image plane, and $C$ is a capturing constant. In the DDM setup, CCD1 in the LFM-path directly samples $\bar{L}$, and the resolution is limited by the lens pitch and the number of pixels. From the light-field image captured with CCD1, the normalized light-field spectrum can be calculated as follows:18

$$\hat{L}(f_x,f_y,f_u,f_v) = \int_{-y_{\max}}^{y_{\max}} \int_{-x_{\max}}^{x_{\max}} \int_{-v_{\max}}^{v_{\max}} \int_{-u_{\max}}^{u_{\max}} \bar{L}(x,y,u,v)\, e^{-2\pi i (f_x x + f_y y + f_u u + f_v v)}\,du\,dv\,dx\,dy, \tag{2}$$

where $x_{\max}$ and $y_{\max}$ are the maximum fields of view in the $x$- and $y$-directions, respectively. From Eq. (2), the Fourier slice of the light field can be generated with the slicing operator.18,20 In particular, the Fourier slice located at the plane ($f_u = 0$, $f_v = 0$) is interpreted as follows:

$$\hat{L}(f_x,f_y,0,0) = \int_{-y_{\max}}^{y_{\max}} \int_{-x_{\max}}^{x_{\max}} \left[\int_{-v_{\max}}^{v_{\max}} \int_{-u_{\max}}^{u_{\max}} \bar{L}(x,y,u,v)\,du\,dv\right] e^{-2\pi i (f_x x + f_y y)}\,dx\,dy. \tag{3}$$

The term inside the brackets can be substituted with the captured 2-D image as shown in Eq. (1) as follows:

$$\hat{L}(f_x,f_y,0,0) = \frac{1}{C} \int_{-y_{\max}}^{y_{\max}} \int_{-x_{\max}}^{x_{\max}} E(x,y)\, e^{-2\pi i (f_x x + f_y y)}\,dx\,dy, \tag{4}$$

and Eq. (3) can be expressed as the 2-D Fourier transform of $E(x,y)$ as follows:

$$\hat{L}(f_x,f_y,0,0) = \frac{1}{C}\,\hat{E}(f_x,f_y), \tag{5}$$

where $\hat{E}$ indicates the Fourier spectrum of $E$. Note that $E$ is simply zero beyond the field of view, so Eq. (4) is identical to a Fourier transform. As the MLA and CCD2 are located at the native image plane, the DDM always satisfies Eq. (5).
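In implementation, Eq. (5) amounts to overwriting the ($f_u = 0$, $f_v = 0$) plane of the 4-D light-field spectrum with the scaled 2-D spectrum. A minimal CUDA sketch of this substitution step is given below; the kernel name, the row-major array layout, and the assumption that the zero-frequency bin sits at index 0 (unshifted FFT output) are illustrative choices, not the implemented code.

```cuda
#include <cuComplex.h>

// Illustrative sketch: replace the (fu = 0, fv = 0) slice of the 4-D
// spectrum Lhat (row-major, Nx x Ny x Nu x Nv) with (1/C) * Ehat,
// realizing the substitution of Eq. (5).
__global__ void substituteFourierSlice(cuFloatComplex* Lhat,       // 4-D light-field spectrum
                                       const cuFloatComplex* Ehat, // 2-D spectrum of the OM image
                                       int Nx, int Ny, int Nu, int Nv,
                                       float invC)                 // 1/C, the capturing constant
{
    int fx = blockIdx.y * blockDim.y + threadIdx.y;
    int fy = blockIdx.x * blockDim.x + threadIdx.x;
    if (fx >= Nx || fy >= Ny) return;

    // Zero-frequency (fu = fv = 0) bin of the (fx, fy) spectrum line.
    size_t sliceIdx = (((size_t)fx * Ny + fy) * Nu) * Nv;
    cuFloatComplex e = Ehat[(size_t)fx * Ny + fy];
    Lhat[sliceIdx] = make_cuFloatComplex(invC * cuCrealf(e), invC * cuCimagf(e));
}
```

Launched over an $N_x \times N_y$ thread grid between the forward 4-D FFT and the inverse 4-D FFT, this single pass fuses the two captured images in the spectral domain.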
Therefore, the captured light-field image and 2-D image can be fused in the light-field Fourier spectral domain. We name this process light-field upsampling because the total bandwidth of the light-field spectrum is enhanced with the additional 2-D image.

Figures 1(b)–1(i) show an example of light-field upsampling in DDM. Assume a sample whose normalized light field at the image plane and its light-field spectrum are shown in Figs. 1(b) and 1(c), respectively. Using DDM, a light-field image with a low resolution [Fig. 1(d)] and a 2-D image with a high resolution [Fig. 1(f)] are obtained. The resolution of the obtained light-field image is $N_x \times N_y \times N_u \times N_v$, where $N_x$ and $N_y$ are the numbers of microlenses in the $x$- and $y$-directions, respectively, and $N_u$ and $N_v$ are the numbers of pixels behind each lens in the $u$- and $v$-directions, respectively. The resolution of the captured 2-D image is $M_x \times M_y$, where $M_x$ and $M_y$ are the numbers of pixels of CCD2 in the $x$- and $y$-directions, respectively. To fuse these two images in the light-field spectral domain with Eq. (5), the lateral resolution of the light-field image should be matched to that of the 2-D image during the light-field upsampling. Various super-resolution algorithms can be applied with consideration of the image characteristics of bio samples.21,22 In this paper, a simple zero-padding method in the Fourier domain is applied, without any texture assumptions, for real-time calculation and robustness.23 Figure 1(e) shows the zero-padded light-field spectrum generated from the captured light field. Meanwhile, the Fourier slice of the captured 2-D image is derived with the 2-D Fourier transform, as shown in Fig. 1(g). By substituting this high-resolution Fourier slice for the corresponding low-resolution slice of the light-field spectrum, as shown in Fig. 1(i), the upsampled light-field spectrum is achieved. Here, a constant is multiplied to match the intensity of the spectrum. Finally, the upsampled light field is derived with the 4-D inverse Fourier transform of the spectrum. As shown in Fig. 1, the upsampled light field from DDM [Fig. 1(h)] has higher resolution and higher correspondence to the original light field [Fig. 1(b)] than the result from LFM only [Fig. 1(d)]. As the upsampling process is composed of repeated Fourier transforms and inverse Fourier transforms, real-time calculation is available with parallel computation.24,25

Previously, a similar light-field upsampling algorithm was introduced by Lu et al.11 They captured a 2-D image and a light-field image with a CCD and updated the light-field spectrum iteratively with the high-resolution 2-D image. In contrast, we built a noniterative light-field upsampling algorithm; this simplicity reduces the total calculation time and enables real-time 3-D observation.

3. Real-Time Three-Dimensional Visualization of the Upsampled Light-Field Image Using a Computational Light-Field Display

In an LFD, the light field reconstructed from stacked LCD panels can be expressed as a multiplication of pixels. In a computational LFD with two LCD panels, the light-field ray reproduced by a pixel $f_i$ on the frontal panel and a pixel $g_j$ on the rear panel can be represented as their product, $f_i g_j$. The whole light-field distribution is the multiplication of all possible pixel pairs within the maximum viewing angle. The layer images $f$ and $g$ are optimized with the iterative nonnegative matrix factorization (NNMF) algorithm.26–29 Here, additive update rules are applied for the factorization.
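Although the exact form depends on implementation details, a representative additive update can be sketched as a projected-gradient step under the two-layer factorization model $\tilde{L} = (P_f f) \circ (P_r g)$; the step size $\alpha$ and this particular form are illustrative assumptions rather than the exact implemented rule (in practice, $\alpha$ may be replaced by a diagonal normalization built with the diag operator):

$$f \leftarrow \max\left\{0,\; f + \alpha\, P_f^{\mathsf T}\left[(L - \tilde{L}) \circ (P_r g)\right]\right\}, \qquad g \leftarrow \max\left\{0,\; g + \alpha\, P_r^{\mathsf T}\left[(L - \tilde{L}) \circ (P_f f)\right]\right\},$$

where $\circ$ denotes element-wise multiplication. The $\max\{0,\,\cdot\,\}$ projection keeps the layer images nonnegative, which is what makes the factorization an NNMF.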
In these update rules, $\tilde{L}$ and $L$ are the reconstructed light field of the current iteration and the target light field, respectively; $P_f$ and $P_r$ are the projection matrices of the frontal and rear panels; and diag and $*$ denote the diagonal operator and matrix multiplication, respectively.

The upsampled light field was provided to the observer with a computational LFD in real time based on parallel computation.24,25 For real-time 3-D visualization, both the light-field upsampling and the layer image optimization should be performed in real time. Figure 2(a) shows the parallel computation algorithm of light-field upsampling. As the light-field upsampling for one scene is composed of a 4-D Fourier transform, a 2-D Fourier transform, and a 4-D inverse Fourier transform, the key to real-time computation was the parallelization of multidimensional Fourier transforms. The algorithm ran multiple one-dimensional (1-D) fast Fourier transform threads on a GPU with CUDA programming (cuFFT).25 On the GPU device, the light-field upsampling was performed in parallel, as shown in Fig. 2(a). Each 2-D and 4-D Fourier transform was divided into 1-D Fourier transforms, and each 1-D Fourier transform was parallelized with the cuFFT function provided by CUDA. For example, the 4-D Fourier transform of an $N_x \times N_y \times N_u \times N_v$ light field was done with four 1-D cuFFT stages. In the first 1-D cuFFT stage, the batch size was the product of the sizes of the other three axes (see the sketch at the end of this section). After each 1-D cuFFT, a transpose was applied to prepare for the next 1-D cuFFT. After the 4-D Fourier transform of the light-field image and the 2-D Fourier transform of the 2-D image, the two were fused in the light-field spectral domain. Here, the high-frequency components were zero-padded for resolution matching. After fusing, the 4-D inverse FFT was performed in parallel in the same way.

The upsampled light field was then converted to the optimized layer images of a computational LFD. The real-time layer image optimization method in the computational LFD was introduced in previous works.26–28 Figure 2(b) shows the parallel algorithm of the layer image optimization. The target light field was set to the upsampled light field, arranged as a matrix whose dimensions are the number of total viewpoints and the number of pixels of a single layer. First, memory was allocated on the GPU device for the frontal layer image $f$, the rear layer image $g$, the projection matrices $P_f$ and $P_r$, the contributions of the frontal and rear layers to the light field, and the reconstructed light field $\tilde{L}$. Then, each layer image was initialized; the initial condition could be random values between 0 and 1 or an estimation from the target light field. Here, the frontal and rear layer images $f$ and $g$ were initialized with the central view image of the target light field $L$. The layer images were then updated with the iterative update rules, and between iterations the reconstructed light field $\tilde{L}$ was updated. As the frontal and rear layer images are updated in series, each update step can be fully parallelized. The iteration number was set to 5 for the convergence of NNMF and real-time calculation. Finally, the optimized frontal and rear layer images were obtained.
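As a concrete illustration of the batched 1-D transform stage described above, a minimal cuFFT sketch follows; the helper name and the packed-layout assumptions are illustrative, not the implemented code.

```cuda
#include <cufft.h>

// Illustrative sketch of one stage of the 4-D FFT: a batched 1-D C2C
// transform along the fastest-varying axis of a packed Nx x Ny x Nu x Nv
// complex light field. The other three axes are handled by transposing
// the array and repeating this stage.
cufftHandle makeBatched1DPlan(int n,     // length along the transformed axis
                              int batch) // product of the other three sizes
{
    cufftHandle plan;
    cufftPlanMany(&plan, 1, &n,
                  NULL, 1, n,  // input: contiguous, batches spaced n apart
                  NULL, 1, n,  // output: same packed layout
                  CUFFT_C2C, batch);
    return plan;
}

// Example usage for the first stage (transform along v):
//   cufftHandle plan = makeBatched1DPlan(Nv, Nx * Ny * Nu);
//   cufftExecC2C(plan, d_lightField, d_lightField, CUFFT_FORWARD);
//   cufftDestroy(plan);
```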
4. Simulation Results

4.1. Wave Optics Simulation of Dual-Dimensional Microscopy

To quantitatively verify the light-field upsampling and the DDM, wave optics simulations were performed with reference 3-D data. A focal stack of a Convallaria sample, obtained with a confocal microscope, was used as the reference data. The images of the DDM setup were simulated from the reference data with the point spread functions of the LFM and the OM, which can be calculated with wave optics.10,23,30 The resultant light-field and 2-D images were generated by integrating the convolutions of the focal stack slices with the point spread functions. Note that the point spread functions in LFM vary with both the axial and lateral positions of the sample, whereas those in OM vary with the axial position only. Further information about image simulations of LFM was introduced in previous works.1,10

Figure 3(a) shows a simulated light-field image and 2-D image of the reference data captured with a DDM setup. The simulation assumes an objective lens and an MLA whose NA and pitch were chosen to model the DDM setup. The reference 3-D data of Convallaria obtained with a confocal microscope (ZEISS, LSM-700) were resized for the simulation. As the aperture of the objective lens creates small circles behind each microlens, as shown in Fig. 3(a), we utilized only the central view images for the upsampling. The upsampled light-field image [Fig. 3(b), right] was generated from these two images and was compared with the original light field [Fig. 3(b), left] and the light field obtained with the LFM only [Fig. 3(b), center]. For a fair comparison, the light-field images from the LFM (center column) were upsampled with the zero-padding method. The simulation results were evaluated with the peak signal-to-noise ratio (PSNR) relative to the original ground truth, which is defined as follows:

$$\mathrm{PSNR} = 10\log_{10}\left(\frac{I_{\max}^2}{\mathrm{MSE}}\right),$$

where $I_{\max}$ is the maximum possible pixel value of the image and MSE is the mean squared error. The numbers inside the images in Fig. 3(b) are the PSNR values relative to the original light field. All view images of the upsampled light fields from DDM show higher coincidence with the original light fields than those from the LFM only. As shown in Fig. 3(c), DDM could reproduce detailed high-resolution components compared with the LFM results.

4.2. Three-Dimensional Visualization of the Upsampled Light Field with a Computational Light-Field Display

The layer images are optimized with the iterative NNMF to reproduce the target light fields.26–28 Figure 3(d) shows simulation results of the layer image optimization in a computational LFD for DDM. The frontal and rear layer images were generated with the NNMF algorithm using the additive update rules. In the simulation, we assumed a computational LFD system composed of two 22-in. LCDs with a 12-mm gap, which was actually used in the experiments. The layer images were generated to reconstruct the perspective view images, and the maximum viewing angle was 12.19 deg. The simulation results of the reconstructed perspective images with the LFD are shown in the right column of Fig. 3(b). The PSNR values relative to the original light field are slightly lower than those of the upsampled light-field images, but they are still much higher than those of the LFM. As biomedical samples usually show high correspondence between directional view images, the reconstructed light-field images agree with the target light-field images. As the LFD reproduces the upsampled light-field images with a high correspondence, the observer can perceive the in-vivo samples directly through the computational LFD.

5. Experimental Results

5.1. Experimental Setup

A DDM was implemented based on a transmissive OM (Olympus, BX-51T), as shown in Fig. 4(a).
A side-by-side observation body (Olympus, BX2-SDO) was attached to the microscope to separate the LFM-path and the OM-path; it is composed of a beam splitter and a mirror, as introduced in Fig. 1(a). All experiments were performed with a dry apochromat objective (Olympus, UPLFLN40X). An MLA with a 2.5-mm focal length (FresnelTech) was fixed on a customized MLA holder and a one-axis stage. The relay lens in the LFM-path was composed of two camera lenses (Canon, EF 100 mm Macro USM) connected nose-to-nose. Two 32-Hz frame-rate CCDs (Allied Vision Technology, Prosilica GX2300C) were utilized. One focused on the back focal plane of the MLA through the relay lens, whereas the other was located at the image plane with a camera adapter (Olympus). The MLA holder, relay lenses, and CCD1 were aligned with an optical jig mounted on the tube lens. The capturing of the two CCDs was synchronized by external signals generated from a data acquisition board (National Instruments). The captured 2-D images and light-field images were transferred to a GPU device (NVIDIA, GTX 1080) with the VIMBA SDK.

For the LFD, two IPS-LCD monitors (LG-22MP57HQ-P) were utilized. The system was implemented without an additional light source by disassembling one LCD (frontal layer) and using the backlight unit of the other (rear layer). In the IPS-LC panel, horizontal and vertical linear polarizers are attached to the frontal and rear sides, respectively. Therefore, by mounting the frontal panel upside down, we could use the IPS-LC panels without detaching the polarizers. The two panels were stacked with precise lateral and angular calibration; the gap between the panels was 12 mm. The maximum viewing angle was 12.49 deg, and the upsampled light-field images were utilized as the target light field. The frontal and rear panels were driven by the GPU and showed the optimized layer images calculated on the GPU device.

5.2. Dual-Dimensional Microscope

The implemented DDM provides synchronized light-field images and 2-D images at 20 Hz. The 3-D behavior of C. elegans was captured with the DDM setup continuously at 20 Hz (see Video 1). Figures 4(b) and 4(c) show upsampled light-field images with the zero-padding algorithm (left) and the proposed light-field upsampling algorithm (right) at 0.00 and 5.20 s, respectively. The central view images are used for the light-field upsampling. Note that only the captured light-field image was utilized for the left result, whereas both the light-field and 2-D images were used for the right result. The results from the DDM setup provided higher resolution and more detailed information about C. elegans compared with the conventional LFM (see Video 2). As the perspective views were generated from a single exposure, we could obtain a high-resolution video of C. elegans while changing the perspective views freely in real time. Video 3 shows an example of the reconstitution of C. elegans movement with changes of perspective view and time frame. The DDM makes it possible to observe the 3-D movement of a live sample with a higher resolution. A direct comparison between the reconstructed perspective view images and the true perspective view images was impossible because we cannot obtain the exact depth map of the moving C. elegans. Nevertheless, we can conclude two facts from the experiments. First, the implemented DDM setup provides light-field videos with a higher resolution than that of an LFM.
Second, the experimental results accord well with the simulation results, which verifies the correspondence between the images from the DDM setup and the reference data.

5.3. Real-Time Light-Field Upsampling and Layer Image Optimization with Parallel Computing

Figure 2(c) shows the average computation time for every step in the light-field upsampling. The image size indicates the resolution of the 2-D image, and each value is the average of 100 calculations. The total computation time increased with the image size because the 1-D array in every 1-D fast Fourier transform stage becomes longer as well. For the same reason, most of the time was consumed at the final 4-D inverse Fourier transform stage, which dealt with the largest light field. However, the total calculation time for a light-field image and a 2-D image was 225 ms, which could provide the upsampled light field at 4.4 Hz with a PC and a GPU (NVIDIA, GTX 1080). The lag could be reduced further using multiple GPUs.

5.4. Real-Time Three-Dimensional Observation with a Computational Light-Field Display

With the implemented DDM-LFD system, the observer could watch the 3-D movement of C. elegans in real time, and direct operations such as tracking or focus changing were also possible. Figure 4(d) shows the optimized layer images and 3-D images of C. elegans reconstructed via the computational LFD. The frontal and rear layer images were calculated in real time, so the behavior of C. elegans was visualized in 3-D and in real time. The reconstructed 3-D images showed correct parallax, as shown in Fig. 4(b) (Video 4). Compared with the target light field (Video 3), the reconstructed perspective view images showed uniform color tones along the viewpoints. As a computational LFD presents high-resolution light-field images beyond its data capacity (number of pixels) by exploiting the correspondence between perspective view images, the reconstructed 3-D images might lose some information, as shown in Video 4. However, the DDM-LFD provided correct 3-D images in real time, which were sufficient to watch the 3-D behavior of C. elegans. Furthermore, the original 2-D and light-field images and the upsampled light field could be saved to a solid-state drive.

6. Discussion

The simulation and experimental results showed that the DDM captures a very high-resolution layer (the 2-D image) and a low-resolution volume (the light field) together. The experimental results showed that the reconstructed view images from DDM provide higher resolution than those from LFM in real time. However, light-field images from LFM contain more detailed information, and a higher-resolution 3-D image can be restored with deconvolution.1,10,12 Here, we analyze the theoretical depth-dependent band limit of the DDM. It is known that the band limit of the LFM is inversely proportional to depth for large $|z|$, where a point light source forms a diffraction-limited spot.10 Furthermore, the band limit has its minimum value at the native object plane ($z = 0$), where the LFM can capture images only at the MLA sampling rate.9,10 In the intermediate region, the band limit is not well established but is known to have a quasiuniform resolution similar to the peak resolution. The whole depth-dependent band limit of the LFM, $\nu_{\mathrm{LFM}}(z)$, is thus determined by the magnification of the objective, the wavelength, and the MLA pitch, with the Sparrow two-point criterion assumed.23 However, the band limit for the OM, $\nu_{\mathrm{OM}}$,
holds only within the DOF of the OM-path and is determined by the refractive index and the CCD pixel pitch, again assuming the Sparrow two-point criterion. Figure 5(a) shows the depth-dependent band limits and DOFs of the OM, LFM, and DDM. In LFM, the band limit is determined by the lenslet sampling rate at the native object plane, and reconstruction artifacts often occur in the deconvolution process around that plane. Therefore, the additional OM-path in the DDM compensates for this degradation successfully. As the light-field upsampling algorithm simply substitutes the information at the native object plane, the band limit of the DDM follows the larger of the two values, $\nu_{\mathrm{DDM}}(z) = \max[\nu_{\mathrm{OM}}(z), \nu_{\mathrm{LFM}}(z)]$: the DDM captures information up to the objective diffraction limit at the native object plane, at the peak resolution of the LFM in the intermediate region, and with a band limit inversely proportional to depth for the remaining depths.

The resolution upper bound of the obtained perspective view images in DDM is the wide-field diffraction limit at the native object plane. If the experimenter wants to focus on information from other depths, this is easily achieved by moving the stage axially. Compared with the conventional OM, the observer can navigate to the desired depth plane while watching the low-resolution image from the large DOF with the DDM setup. This helps in understanding the 3-D behavior of the specimen and dramatically reduces the experiment time. Nevertheless, the discontinuity of the band limit over depth makes the view image unnatural. The blur unexpectedly increases at the boundary of the OM DOF, which is different from images produced by conventional capturing devices. From this point of view, we can reshape the band limit by applying an additional aperture only in the OM-path, in front of the tube lens. As this additional aperture results in a smaller NA of the OM-path than that of the objective, we can extend the DOF by sacrificing the maximum resolution. Figure 5(b) shows the band-limit changes obtained by applying different apertures in the OM-path. When the NA is large, the high-frequency components and high resolution are preserved, but the DOF becomes narrow; when the NA is small, the DOF is extended at the cost of resolution. We can freely change the NA of the LFM-path and the OM-path according to the needs and experimental circumstances. When we changed the NA of the OM-path, the light-field upsampling algorithm could not be applied directly because the DOF was changed. Instead, we could reconstruct the depth information with deconvolution. Compared with the results in previous works,1,10 the additional information from the OM-path can greatly compensate for the image loss not only at the native object plane but also in the reconstruction artifact zone. The two apertures, the MLA and the circular aperture, are similar to coded apertures designed for different depths, and the methodology of multiple coded apertures could be directly applied to the conventional LFDM method.31,32

DDM is not just a high-resolution 3-D imaging method; it is also a real-time interactive 3-D observation method for in-vivo samples. The real-time 3-D observation of micro-objects with a computational LFD enables real-time 3-D interactive experiments. As shown in Videos 3 and 4, the experimenter can capture the full 3-D behavior of C. elegans with instant tracking and focus changing by watching the 3-D videos of C. elegans. Due to its structural simplicity, DDM could be even more effective with various fluorescence technologies. The combination of whole-brain imaging and DDM could capture the neuronal activity of the entire nervous system of C. elegans with a high resolution.
Our real-time 3-D interactive system is applicable not only to biological experiments but also to practical clinical fields such as endoscopy,33 which examines disease and delivers treatment directly. Finally, the dual-dimensional imaging scheme is not limited to microscopy. It can be applied to real-scale imaging systems, such as light-field cameras and integral imaging systems. The light-field upsampling algorithm can be applied directly to real-scale objects, which can dramatically enhance the lateral resolution.

7. Conclusion

Here, we demonstrated a real-time high-resolution in-vivo 3-D observation method dubbed DDM. A higher-resolution light-field image was obtained in real time by combining a light-field image from the LFM and a 2-D image from the OM, and the upsampled light-field images of in-vivo objects were shown with a stacked LFD in real time. Two synchronized CCDs captured both light-field images and 2-D images at 20 Hz, and the upsampled light fields were generated from them. Then, the optimized layer images were generated for the LFD from the upsampled light fields. The 3-D images were optically reconstructed with the LFD, so the observer could watch the 3-D movement of C. elegans through the DDM setup and directly interact with it by moving the stage. The simulation results showed that DDM greatly enhances the lateral resolution up to the diffraction limit at the native object plane and compensates for the image degradation there. As DDM provides a very high-resolution layer and a low-resolution volume together, the experimenter can navigate to the desired depth plane by axially moving the stage. Furthermore, the band limit of DDM can be reshaped for various purposes by applying an additional aperture in the OM-path. The structural simplicity of DDM encourages various applications across the field of microscopy. DDM can also be applied to endoscopy33 or real-scale light-field cameras.34

Disclosures

The authors have no relevant financial interests in this article and no potential conflicts of interest to disclose.

Acknowledgments

This research was supported by Projects for Research and Development of Police Science and Technology under the Center for Research and Development of Police Science and Technology and the Korean National Police Agency (Grant No. PA-H000001). We wish to thank Professor Junho Lee and Dr. Daehan Lee (Department of Biological Sciences, Seoul National University) for the generous donation of the C. elegans samples used in this study.

References
1. R. Prevedel et al., "Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy," Nat. Methods 11(7), 727–730 (2014). https://doi.org/10.1038/nmeth.2964
2. H. Lee et al., "Nictation, a dispersal behavior of the nematode Caenorhabditis elegans, is regulated by IL2 neurons," Nat. Neurosci. 15(1), 107–112 (2012). https://doi.org/10.1038/nn.2975
3. M. Rajadhyaksha et al., "In vivo confocal scanning laser microscopy of human skin: melanin provides strong contrast," J. Invest. Dermatol. 104(6), 946–952 (1995). https://doi.org/10.1111/1523-1747.ep12606215
4. M. B. Ahrens et al., "Whole-brain functional imaging at cellular resolution using light-sheet microscopy," Nat. Methods 10(5), 413–420 (2013). https://doi.org/10.1038/nmeth.2434
5. S. Abrahamsson et al., "Fast multicolor 3D imaging using aberration-corrected multifocus microscopy," Nat. Methods 10(1), 60–63 (2013). https://doi.org/10.1038/nmeth.2277
6. J. Kim et al., "Real-time integral imaging system for light field microscopy," Opt. Express 22(9), 10210–10220 (2014). https://doi.org/10.1364/OE.22.010210
7. Y. Kumagai et al., "Magnifying endoscopy, stereoscopic microscopy, and the microvascular architecture of superficial esophageal carcinoma," Endoscopy 34(5), 369–375 (2002). https://doi.org/10.1055/s-2002-25285
8. J.-H. Park, "Recent progress in computer-generated holography for three-dimensional scenes," J. Inf. Disp. 18(1), 1–12 (2017). https://doi.org/10.1080/15980316.2016.1255672
9. M. Levoy et al., "Light field microscopy," ACM Trans. Graphics 25(3), 924–934 (2006). https://doi.org/10.1145/1141911
10. M. Broxton et al., "Wave optics theory and 3-D deconvolution for the light field microscope," Opt. Express 21(21), 25418–25439 (2013). https://doi.org/10.1364/OE.21.025418
11. C.-H. Lu, S. Muenzel, and J. Fleischer, "High-resolution light-field microscopy," in Computational Optical Sensing and Imaging, CTh3B-2 (2013).
12. M. Levoy, Z. Zhang, and I. McDowall, "Recording and controlling the 4D light field in a microscope using microlens arrays," J. Microsc. 235(2), 144–162 (2009). https://doi.org/10.1111/jmi.2009.235.issue-2
13. J. Kim et al., "F-number matching method in light field microscopy using an elastic micro lens array," Opt. Lett. 41(12), 2751–2754 (2016). https://doi.org/10.1364/OL.41.002751
14. J.-H. Jung, J. Kim, and B. Lee, "Solution of pseudoscopic problem in integral imaging for real-time processing," Opt. Lett. 38(1), 76–78 (2013). https://doi.org/10.1364/OL.38.000076
15. J. Kim et al., "Real-time capturing and 3D visualization method based on integral imaging," Opt. Express 21(16), 18742–18753 (2013). https://doi.org/10.1364/OE.21.018742
16. F. Okano et al., "Real-time pickup method for a three-dimensional image based on integral photography," Appl. Opt. 36(7), 1598–1603 (1997). https://doi.org/10.1364/AO.36.001598
17. J. Kim et al., "A single-shot 2D/3D simultaneous imaging microscope based on light field microscopy," Proc. SPIE 9655, 96551O (2015). https://doi.org/10.1117/12.2185253
18. R. Ng, "Fourier slice photography," ACM Trans. Graphics 24(3), 735–744 (2005). https://doi.org/10.1145/1073204
19. R. Ng et al., "Light field photography with a hand-held plenoptic camera," Comput. Sci. Tech. Rep. 2(11), 1–11 (2005).
20. J.-H. Park and K.-M. Jeong, "Frequency domain depth filtering of integral imaging," Opt. Express 19(19), 18729–18741 (2011). https://doi.org/10.1364/OE.19.018729
21. S. Farsiu et al., "Advances and challenges in super-resolution," Int. J. Imaging Syst. Technol. 14(2), 47–57 (2004). https://doi.org/10.1002/(ISSN)1098-1098
22. M. Bertero et al., "Image deblurring with Poisson data: from cells to galaxies," Inverse Prob. 25(12), 123006 (2009). https://doi.org/10.1088/0266-5611/25/12/123006
23. J. W. Goodman, Introduction to Fourier Optics, Roberts and Company Publishers, New York (2005).
24. A. Eklund and P. Dufort, "Non-separable 2D, 3D, and 4D filtering with CUDA," in GPU Pro 5: Advanced Rendering Techniques, pp. 469–492, AK Peters/CRC Press, New York (2014).
25. S. Al Umairy et al., "On the use of small 2D convolutions on GPUs," in Computer Architecture, pp. 52–64, Springer, Berlin (2011).
26. S. Lee et al., "Additive light field displays: realization of augmented reality with holographic optical elements," ACM Trans. Graphics 35(4), 60 (2016). https://doi.org/10.1145/2897824.2925971
27. G. Wetzstein et al., "Tensor displays: compressive light field synthesis using multilayer displays with directional backlighting," ACM Trans. Graphics 31(4), 1–11 (2012). https://doi.org/10.1145/2185520
28. D. Lanman et al., "Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization," ACM Trans. Graphics 29(6), 163 (2010). https://doi.org/10.1145/1882261
29. S. Moon et al., "Depth-fused multi-projection display using scattering polarizers," in Digital Holography and Three-Dimensional Imaging, W2A-18, Optical Society of America, Washington, D.C. (2017).
30. M. Born and E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, Elsevier, New York (1980).
31. C. Zhou, S. Lin, and S. Nayar, "Coded aperture pairs for depth from defocus," in IEEE 12th Int. Conf. on Computer Vision, pp. 325–332 (2009).
32. A. Levin et al., "Image and depth from a conventional camera with a coded aperture," ACM Trans. Graphics 26(3), 70 (2007). https://doi.org/10.1145/1276377
33. J. Liu et al., "Light field endoscopy and its parametric description," Opt. Lett. 42(9), 1804–1807 (2017). https://doi.org/10.1364/OL.42.001804
34. Y. Jeong et al., "Real-time depth controllable integral imaging pickup and reconstruction method with a light field camera," Appl. Opt. 54(35), 10333–10341 (2015). https://doi.org/10.1364/AO.54.010333
Biography

Jonghyun Kim is a research scientist at NVIDIA Research. He received his BS degree from the School of Electrical Engineering, Seoul National University, in 2011 and his PhD from the Department of Electrical Engineering and Computer Science, Seoul National University, in 2017. He is the author of more than 20 journal papers and 35 conference papers. His current research interests include light-field microscopy, light-field display, augmented reality display, and holographic display. This work was conducted while he was at Seoul National University as a postdoctoral researcher.

Seokil Moon received his BS degree in electrical engineering from Pohang University of Science and Technology, Pohang, South Korea. Currently, he is working toward his PhD in the School of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea. His current research interests focus on light-field imaging techniques and visualization.

Youngmo Jeong received his BS degree in electrical and computer engineering from Seoul National University, Korea, in 2013 and is currently working toward his PhD in electrical engineering at Seoul National University, Korea. His primary research interests are in the areas of 3-D display, optical information processing, and augmented reality.

Changwon Jang received his BS degree in electrical engineering from Seoul National University, Seoul, Korea, in 2013, where he is currently working toward his PhD at the School of Electrical and Computer Engineering. His primary research interests focus on the areas of 3-D display and digital holography.

Youngmin Kim is a senior research engineer at the Korea Electronics Technology Institute, Korea. He received his BS degree in 2005 and his PhD in February 2011 in electrical engineering from Seoul National University, Seoul, Korea. He is the author of more than 30 journal papers and has written two book chapters. His current research interests include 3-D display, holography, VR/AR display, and visual fatigue associated with 3-D display. He is a fellow of the Optical Society of Korea.

Byoungho Lee received his PhD from the University of California at Berkeley in 1993. Since September 1994, he has been on the faculty of the School of Electrical Engineering, Seoul National University, where he is currently the head of the department. He received the Jinbojang National Badge of Korea (2016). He is a fellow of SPIE, OSA, and IEEE and the president-elect of the Optical Society of Korea.