KEYWORDS: 3D image processing, Integral imaging, 3D image reconstruction, Visualization, 3D displays, Three dimensional sensing, Reconstruction algorithms, Algorithm development, Cameras, Stereoscopy
We have developed a project-based learning approach aimed at teaching and undergraduate research in optics and photonics. The proposed project-based learning process centers on hands-on experiments with 3D light-field integral imaging technologies. The research projects enable undergraduate engineering students of different levels and majors to gain a deep understanding of optics and photonics through early research experience and student-faculty engagement.
KEYWORDS: Polarimetry, 3D image processing, Cameras, Integral imaging, Signal to noise ratio, Polarization, Stereoscopy, Sensors, Imaging systems, Image sensors
We overview a previously reported three-dimensional (3D) polarimetric integral imaging method, along with algorithms for extracting 3D polarimetric information in low-light environments. A 3D integral imaging reconstruction algorithm is first applied to the originally captured two-dimensional (2D) polarimetric images. The signal-to-noise ratio (SNR) of the 3D reconstructed polarimetric image is enhanced compared with the 2D images. The Stokes polarization parameters are then measured and used to compute the 3D volumetric degree of polarization (DoP) image of the scene. Statistical analysis of the 3D DoP extracts the polarimetric properties of the scene. Experimental results verify that the method outperforms conventional 2D polarimetric imaging in low-illumination environments.
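The DoP computation from Stokes parameters described above can be sketched as follows. This is a minimal illustration assuming four linear-polarizer intensity measurements (0°, 45°, 90°, 135°); the function name and measurement setup are hypothetical and may differ from the reported system, which additionally operates on 3D reconstructed images.

```python
import numpy as np

def stokes_dop(i0, i45, i90, i135):
    """Compute linear Stokes parameters and degree of (linear) polarization
    from four polarizer-angle intensity images. Illustrative sketch only."""
    s0 = i0 + i90            # total intensity
    s1 = i0 - i90            # horizontal vs. vertical preference
    s2 = i45 - i135          # +45 deg vs. -45 deg preference
    dop = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    return s0, s1, s2, dop

# Toy check: fully horizontally polarized light yields DoP close to 1
i0 = np.full((2, 2), 1.0)
i90 = np.zeros((2, 2))
i45 = i135 = np.full((2, 2), 0.5)
_, _, _, dop = stokes_dop(i0, i45, i90, i135)
```

In the reported pipeline, such per-pixel DoP maps would be computed on the 3D reconstructed images at each depth plane to form the volumetric DoP image.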
We overview our recently published multi-dimensional integral imaging-based system for underwater optical signal detection. For robust detection, an optical signal propagating through turbid water is encoded using multiple light sources and spread-spectrum techniques. An array of optical sensors captures video sequences of elemental images, which are reconstructed using multi-dimensional integral imaging followed by a 4D correlation to detect the transmitted signal. The area under the curve (AUC) and the number of detection errors are used as metrics to assess system performance. The system successfully detects an optical signal under higher turbidity than is possible with conventional sensing and detection approaches.
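The spread-spectrum encoding and correlation-based detection principle can be sketched in one dimension. This is a hypothetical simplification, not the reported 4D pipeline: a data bit is spread by a pseudo-random code, corrupted by heavy noise standing in for turbidity, and recovered by correlating against the known code.

```python
import numpy as np

rng = np.random.default_rng(0)

chips = 64
code = rng.choice([-1.0, 1.0], size=chips)   # pseudo-random spreading sequence
bit = 1                                       # transmitted bit (+1 / -1)
tx = bit * code                               # spread-spectrum signal
rx = tx + rng.normal(0, 1.0, size=chips)      # degraded channel (turbidity proxy)

# Correlation receiver: despread with the known code and threshold on the sign.
# The correlation gain (64 chips here) is what makes detection robust to noise.
decision = 1 if np.dot(rx, code) > 0 else -1
```

In the reported system, the analogous correlation is performed in four dimensions over the integral-imaging-reconstructed video volume.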
We overview a previously reported method for spatial-temporal human gesture recognition under degraded environmental conditions using three-dimensional (3D) integral imaging (InIm) technology with correlation filters. The degraded conditions include low illumination and occlusion in front of the human gesture. The gesture is captured by passive integral imaging; the signal is then processed using computational reconstruction and denoising algorithms to reduce noise and remove partial occlusion. Recognition is finally performed using correlation filters. Experimental results show that the approach is promising for human gesture recognition under degraded environmental conditions compared with conventional recognition algorithms.
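The correlation-filter recognition step can be illustrated with a basic normalized cross-correlation search, a simple stand-in for the trained correlation filters used in the reported work. The function name and toy data are assumptions for illustration.

```python
import numpy as np

def correlate_locate(scene, template):
    """Locate a template in a scene by zero-mean normalized cross-correlation.
    A recognition decision can be made by thresholding the peak score."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-12)
    best, pos = -np.inf, (0, 0)
    for r in range(scene.shape[0] - th + 1):
        for c in range(scene.shape[1] - tw + 1):
            patch = scene[r:r+th, c:c+tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-12)
            score = np.mean(p * t)           # peaks near 1 at a match
            if score > best:
                best, pos = score, (r, c)
    return pos, best

scene = np.zeros((8, 8))
scene[3:5, 4:6] = [[1, 2], [3, 4]]           # embedded target pattern
template = np.array([[1.0, 2.0], [3.0, 4.0]])
pos, score = correlate_locate(scene, template)
```

In the reported pipeline this correlation operates on the denoised 3D reconstructed frames rather than raw 2D captures, which is what makes recognition feasible under occlusion and low light.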
KEYWORDS: 3D image reconstruction, 3D image processing, Signal to noise ratio, Cameras, Integral imaging, Photons, Visualization, Optical sensors, Facial recognition systems, Sensors
We overview a recently published work that utilizes three-dimensional (3D) integral imaging (InIm) to capture 3D information of a scene in low illumination conditions using passive imaging sensors. An object behind occlusion is imaged using 3D InIm. A computational 3D reconstructed image is generated from the captured scene information at a particular depth plane, showing the object without occlusion. Moreover, 3D InIm substantially increases the signal-to-noise ratio of the 3D reconstructed scene compared with a single two-dimensional (2D) image, as readout noise is minimized. This occurs because the 3D InIm reconstruction algorithm is naturally optimal in the maximum-likelihood sense in the presence of additive Gaussian noise. After 3D InIm reconstruction, facial detection using the Viola-Jones object detection framework is successful, whereas it fails on a single 2D elemental image.
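The maximum-likelihood argument above can be made concrete with a toy sketch. For a fronto-parallel scene, computational InIm reconstruction at the correct depth plane amounts to averaging the (appropriately shifted) elemental images, and averaging N independent Gaussian-noise realizations is the maximum-likelihood estimate, reducing noise standard deviation by about sqrt(N). The shift step is assumed already applied here for simplicity; scene size and noise level are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(1)

truth = rng.uniform(0.2, 0.8, size=(16, 16))       # stand-in scene radiance
n_elemental = 25                                    # e.g. a 5x5 camera array
noisy = truth + rng.normal(0, 0.5, size=(n_elemental, 16, 16))

# ML reconstruction under additive Gaussian noise: the per-pixel mean
# of the registered elemental images.
recon = noisy.mean(axis=0)

err_single = np.std(noisy[0] - truth)   # approx 0.5 (single 2D image)
err_recon = np.std(recon - truth)       # approx 0.5 / sqrt(25) = 0.1
```

This SNR gain is what allows downstream detectors such as Viola-Jones to succeed on the reconstruction where they fail on any single elemental image.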
KEYWORDS: 3D image processing, Cameras, 3D image reconstruction, Integral imaging, 3D surface sensing, 3D modeling, Object recognition, Sensing systems, Nonlinear dynamics, Visualization
We overview a previously reported method for three-dimensional (3D) profilometric reconstruction with occlusion removal based on flexible sensing integral imaging. With flexible sensing, the field of view of the imaging system can be increased by randomly distributing a camera array on a non-planar surface. The camera matrices are estimated from the captured multi-perspective elemental images, and the estimated matrices are used for 3D reconstruction. Object recognition is then performed on the reconstructed image by nonlinear correlation to detect the 3D position of the object. Finally, an algorithm is proposed to visualize the 3D profile of the object with the occlusion removed.
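The camera matrices referred to above follow the standard pinhole model P = K[R | t]. A minimal sketch of projecting a 3D world point with one such matrix is shown below; the intrinsic and extrinsic values are invented for illustration, whereas the reported method estimates a matrix of this form for each randomly placed camera before reconstruction.

```python
import numpy as np

K = np.array([[800.0, 0, 320],            # intrinsics: focal length (px),
              [0, 800, 240],              # principal point (320, 240)
              [0, 0, 1]])
R = np.eye(3)                             # extrinsic rotation (identity here)
t = np.array([[0.0], [0.0], [2.0]])       # extrinsic translation
P = K @ np.hstack([R, t])                 # 3x4 camera matrix P = K [R | t]

X = np.array([0.1, -0.05, 3.0, 1.0])      # homogeneous 3D world point
x = P @ X                                 # project into the image
u, v = x[0] / x[2], x[1] / x[2]           # dehomogenize to pixel coordinates
```

Inverting this mapping across the multi-perspective elemental images is what enables depth-plane reconstruction from a non-planar, randomly distributed array.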
We present recent progress on the previously reported Multidimensional Optical Sensing and Imaging Systems (MOSIS) 2.0 for target recognition, material inspection, and integrated visualization. The degrees of freedom of MOSIS 2.0 include three-dimensional (3D), polarimetric, and multispectral imaging, each of which provides unique information about a scene. 3D computationally reconstructed images mitigate occlusion in front of the object and can be used for 3D object recognition. The degree of polarization (DoP) of light reflected from an object's surface is measured by 3D polarimetric imaging. Multispectral imaging segments targets with specific spectral properties.
We overview a previously reported head-tracking integral imaging three-dimensional (3D) display that extends the viewing angle by adapting to the viewer's position without crosstalk. A head detection system obtains the viewer's head position and rotation, and a new set of elemental images is then computed using the smart pseudoscopic-to-orthoscopic conversion (SPOC) method for the head-tracking 3D display. Experimental results validate the proposed method for high-quality 3D display with a large viewing angle.
In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By applying the pseudoscopic-to-orthoscopic conversion method, elemental image arrays captured with different parameters can be converted into a common format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The generated elemental images contain both the virtual objects and the real-world scene with the desired depth and transparency parameters. Experimental results indicate the feasibility of the proposed integral-imaging-based 3D augmented reality.
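The merging step with a transparency parameter can be sketched as alpha blending between the virtual-object and real-scene elemental image arrays. This is a simplified illustration with invented array shapes and values; the reported algorithm additionally handles the desired depth placement via the pseudoscopic-to-orthoscopic conversion.

```python
import numpy as np

# 4x4 array of 8x8 elemental images for the real scene and the virtual object
real = np.full((4, 4, 8, 8), 0.3)
virtual = np.zeros_like(real)
mask = np.zeros_like(real, dtype=bool)
virtual[:, :, 2:6, 2:6] = 0.9             # virtual-object pixels
mask[:, :, 2:6, 2:6] = True               # where the virtual object appears

alpha = 0.7                               # virtual-object opacity (transparency parameter)
merged = np.where(mask, alpha * virtual + (1 - alpha) * real, real)
```

Displaying the merged elemental image array through the lenslet array then shows the virtual object blended into the real scene at the intended transparency.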
KEYWORDS: 3D displays, 3D image processing, Integral imaging, Imaging arrays, Displays, 3D image reconstruction, Tablets, Image processing, Image quality, Distortion
In this paper, we present a technique to generate an elemental image array matched to the display device for three-dimensional integral imaging. Experimental results show that the technique accurately matches different display formats and improves display quality.