This PDF file contains the front matter associated with SPIE Proceedings Volume 10219, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
Incoherent digital holography plays an important role in extending the application fields of digital holography to bioimaging. There are two main configurations for implementing incoherent digital holography: one is based on a Michelson-type interferometer, and the other uses a common-path configuration with a diffractive optical element. In this invited paper, we briefly present the two configurations of incoherent digital holography, and then describe experimental results and a numerical analysis of common-path incoherent digital holography using the diffractive optical element.
The possibility of incoherent digital holography has been widely studied because it does not require coherent light sources. Here, spatially incoherent Fourier digital holography is described. The incoherent hologram is obtained with a rotational shearing interferometer; the hologram produced by the interferometer is a cosine transform of the spatially incoherent object. After describing the principle of the rotational shearing interferometer, methods for obtaining the Fourier transform of an object are presented.
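The cosine-transform relationship can be checked numerically. A minimal sketch, assuming an idealized interferometer so that the recorded hologram is simply the real (cosine) part of the object's Fourier spectrum:

```python
import numpy as np

# Spatially incoherent object: a 2D intensity distribution (non-negative).
obj = np.zeros((128, 128))
obj[40:50, 60:70] = 1.0  # a simple bright patch

# For a real-valued input, the cosine transform equals the real part of
# the Fourier transform; an idealized rotational shearing interferometer
# records this as the incoherent hologram.
hologram = np.real(np.fft.fft2(obj))

# Reconstruction: inverse-transforming the cosine spectrum returns the
# object plus its point-symmetric twin (the usual conjugate term),
# each at half amplitude.
recon = np.real(np.fft.ifft2(hologram))
```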
The Kinect sensor is a device that captures a real scene with a camera and a depth sensor. A virtual model of the scene can then be obtained as a point-cloud representation, from which a complex hologram can be computed. However, complex data cannot be used directly, because display devices cannot modulate amplitude and phase at the same time. Binary holograms are commonly used instead, since they present several advantages. Among the methods proposed to convert holograms into a binary format, direct binary search (DBS) not only gives the best performance, it also offers the possibility of choosing display parameters for the binary hologram that differ from those of the original complex hologram. Since the wavelength and reconstruction distance can be modified, chromatic aberrations can be compensated. In this study, we examine the potential of DBS for RGB holographic display.
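The DBS loop itself is simple to sketch. A toy illustration, assuming a Fourier-hologram geometry where reconstruction is modeled by a single FFT (the paper's propagation model, wavelengths, and distances are not reproduced here):

```python
import numpy as np

def recon_error(binary, target):
    """Error between the hologram's reconstruction and the desired image
    (toy model: reconstruction is one FFT; both are energy-normalized)."""
    recon = np.abs(np.fft.fft2(binary))
    recon /= np.linalg.norm(recon) + 1e-12
    return np.sum((recon - target) ** 2)

def dbs_binarize(target, n_passes=3, seed=0):
    """Direct binary search: visit every pixel, flip it, and keep the
    flip only if it lowers the reconstruction error."""
    rng = np.random.default_rng(seed)
    target = target / (np.linalg.norm(target) + 1e-12)
    binary = rng.integers(0, 2, size=target.shape).astype(float)
    err = recon_error(binary, target)
    for _ in range(n_passes):
        for idx in np.ndindex(binary.shape):
            binary[idx] = 1.0 - binary[idx]      # trial flip
            trial = recon_error(binary, target)
            if trial < err:
                err = trial                      # keep the flip
            else:
                binary[idx] = 1.0 - binary[idx]  # revert
    return binary, err

# Example: a 16x16 desired intensity pattern. Changing the wavelength or
# distance inside the reconstruction model is what lets the binary
# hologram's display parameters differ from the source hologram's.
target = np.zeros((16, 16))
target[4:8, 4:8] = 1.0
b, e = dbs_binarize(target)
```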
In digital 3D holographic displays, the generation of realistic 3D images has been hindered by limited viewing angle and image size. Here we demonstrate a digital 3D holographic display using volume speckle fields produced by scattering layers, in which both the viewing angle and the image size are greatly enhanced. Although volume speckle fields exhibit random distributions, the transmitted speckle fields have a linear and deterministic relationship with the input field. By modulating the incident wavefront with a digital micromirror device, the volume speckle patterns are controlled to generate 3D images of micrometer-size optical foci with a 35° viewing angle in a volume of 2 cm × 2 cm × 2 cm.
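The linear, deterministic input-output relationship is what makes the speckle controllable. A minimal sketch, assuming a known complex transmission matrix T and binary amplitude modulation as on a DMD (in practice T is measured interferometrically):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 1024, 256

# Scattering layer modeled as a random complex transmission matrix:
# output_field = T @ input_field (linear and deterministic).
T = (rng.standard_normal((n_out, n_in)) +
     1j * rng.standard_normal((n_out, n_in))) / np.sqrt(2 * n_in)

target = 100  # index of the desired output focus

# Binary amplitude modulation (DMD): switch on only the input pixels
# whose contribution to the target output arrives roughly in phase.
pattern = (np.cos(np.angle(T[target])) > 0).astype(float)

focus = np.abs(T[target] @ pattern) ** 2
background = np.mean(np.abs(T @ pattern) ** 2)
print(f"enhancement ~ {focus / background:.1f}")  # focus vs mean speckle
```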
An aperture-sharing camera to acquire multiview images is introduced. The camera is built with a mirrorless camera and a high-speed LC shutter array located at the entrance pupil of the camera's objective, dividing the pupil into a number of sections of equal dimension. The LC shutters in the array are opened one at a time in synchronization with the camera shutter. The images from neighboring shutters reveal a constant disparity between them. The measured disparity matches closely with that calculated from theory and is proportional to the distance of each LC shutter from the camera's optical axis.
Object tracking is a very important problem in computer vision research. Among the difficulties of object tracking, partial occlusion is one of the most serious and challenging. To address this problem, we propose novel approaches to object tracking on plenoptic image sequences. Our approaches take advantage of the refocusing capability that plenoptic images provide: they take as input sequences of focal stacks constructed from the plenoptic image sequences. The proposed image selection algorithms select, from each focal stack, the optimal image that maximizes tracking accuracy. A focus-measure approach and a confidence-measure approach are proposed for image selection, and both were validated by experiments using thirteen plenoptic image sequences that include heavily occluded target objects. The experimental results showed that the proposed approaches compare favorably with conventional 2D object tracking algorithms.
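The image selection step can be sketched as follows. A minimal illustration, assuming a variance-of-Laplacian focus measure evaluated inside the target's bounding box; the paper's focus and confidence measures may differ:

```python
import numpy as np

def laplacian(img):
    """Discrete Laplacian via circular shifts (4-neighbour kernel)."""
    return (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
            np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)

def select_best_slice(focal_stack, bbox):
    """Pick the focal-stack slice with the sharpest content inside the
    target bounding box (x, y, w, h); sharp = high Laplacian variance."""
    x, y, w, h = bbox
    scores = [np.var(laplacian(s[y:y + h, x:x + w])) for s in focal_stack]
    return int(np.argmax(scores))

# Example: a stack of 5 synthetic slices; the tracker would then run on
# the selected slice at each frame.
rng = np.random.default_rng(0)
stack = [rng.random((64, 64)) for _ in range(5)]
best = select_best_slice(stack, bbox=(10, 10, 20, 20))
```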
A common camera loses a huge amount of the information obtainable from a scene: it does not record the values of the individual rays passing through a point, but merely keeps the sum of the intensities of all rays passing through that point. Plenoptic images can be exploited to provide a 3D representation of the scene, and watermarking such images can help protect their ownership. In this paper we propose a method for watermarking plenoptic images to achieve this aim. The performance of the proposed method is validated by experimental results, and a compromise is struck between imperceptibility and robustness.
Integral imaging is well known for its capability of recording both the spatial and the angular information of three-dimensional (3D) scenes. Based on this idea, the plenoptic concept has been developed over the past two decades, leading to a new camera design with the capacity to capture the spatial-angular information with a single sensor and a single shot. However, the classical plenoptic design presents two drawbacks: one is the oblique recording made by the external microlenses; the other is the loss of information due to diffraction effects. In this contribution we report a change in the paradigm and propose the combination of a telecentric architecture and Fourier-plane recording. This new capture geometry permits substantial improvements in resolution, depth of field, and computation time.
We developed a measurement tool for binocular eye movement and used it to examine the perception of depth distance in integral photography images, a type of three-dimensional image. Furthermore, we evaluated the perception of depth distance in integral photography images with a subjective test, and considered the depth-perception results from these two experiments together. Additionally, we examined the perception of depth distance for real objects, and compared the results for integral photography images with those for real objects.
A simulator that can test the visual perception response of light field displays is introduced. The simulator can present up to 8 view images to each eye simultaneously, to test the differences between different numbers of view images under the super-multiview condition. The images pass through a window of 4 mm width located at the pupil plane of each eye. Since each view image has its own slot in the window, the images enter the eye separately, without overlapping one another. The simulator shows that the vergence response of viewers' eyes to an image at a certain distance is closer to that for a real object at the same distance with 4 views than with 2 views. This indicates that the focusable depth range will increase as the number of different view images increases.
Light-field content is required to provide full-parallax 3D views with dense angular resolution. However, it is very hard to directly capture such dense full-parallax view images with a camera system, because doing so requires specialised micro-lens arrays or a heavy camera-array system. Therefore, we present an algorithm to synthesise full-parallax virtual view images using image-based rendering, appropriate for light-field content generation. The proposed algorithm consists of four-directional image warping, view-image blending using nearest-view priority selection and the sum of weighted inverse Euclidean distances, and hole filling. Experimental results show that dense full-parallax virtual view images can be generated from sparse full-parallax view images with few image artefacts. Finally, it is confirmed that the proposed full-parallax view synthesis algorithm can be used for light-field content generation without a dense camera-array system.
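The blending step can be sketched compactly. A simplified illustration, assuming neighbour views already warped to the virtual viewpoint, with hole masks, blended by the sum of weighted inverse Euclidean distances (the nearest-view priority selection of the full algorithm is omitted):

```python
import numpy as np

def blend_views(warped, masks, cam_pos, virt_pos, eps=1e-6):
    """Blend warped neighbour views into one virtual view.
    warped:   list of HxW images already warped to the virtual viewpoint.
    masks:    list of HxW booleans, True where the warp produced data.
    cam_pos:  (N, 2) source-camera positions on the camera plane.
    virt_pos: (2,) position of the virtual view."""
    d = np.linalg.norm(np.asarray(cam_pos) - np.asarray(virt_pos), axis=1)
    w = 1.0 / (d + eps)                  # inverse Euclidean distance
    num = np.zeros_like(warped[0])
    den = np.zeros_like(warped[0])
    for img, m, wi in zip(warped, masks, w):
        num += wi * m * img
        den += wi * m
    out = np.where(den > 0, num / np.maximum(den, eps), 0.0)
    holes = den == 0                     # nothing mapped here
    return out, holes

# Pixels where no view contributes (holes == True) are filled afterwards,
# e.g. by inpainting from surrounding non-hole pixels.
```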
Point crosstalk is a criterion for representing 3D image quality in glassless 3D displays, and motion parallax is coupled, in part, with point crosstalk when we consider the smoothness of motion parallax. Therefore we need to find the relationship between point crosstalk and motion parallax. Lowering point crosstalk is important for better 3D image quality, but more discrete motion parallax appears at lower point crosstalk at the optimal viewing distance (OVD). Another measure for representing the smoothness of motion parallax is therefore necessary, and we analyze average crosstalk as a candidate parameter for representing 3D image quality, including motion-parallax smoothness, in glassless 3D displays.
Depth and resolution always trade off in integral imaging technology. With dynamically adjustable devices, the two factors of integral imaging can be fully compensated by time-multiplexed addressing. Those dynamic devices can be mechanically or electrically driven. In this presentation, we mainly focus on discussing various liquid crystal devices that can change the focal length, scan and shift the image position, or switch between 2D and 3D modes. By using these liquid crystal devices, dynamic integral imaging has been successfully applied to 3D display, capturing, and bio-imaging applications.
Integral imaging (II) combined with photon-counting detection has been researched for three-dimensional (3D) information sensing under low-light-level conditions. This paper addresses the nonlinear correlation of photon-counting integral imaging. The first- and second-order statistical properties of the nonlinear correlation are verified while varying the mean number of photo-counts in the scene. In the experiment, various nonlinearity factors are tested and simulation results are compared with the theoretically derived results.
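A kth-law processor is one common form of nonlinear correlation. A minimal sketch, assuming the nonlinearity factor k is applied to the magnitude of the cross-power spectrum while its phase is retained, with a Poisson photon-counting scene model:

```python
import numpy as np

def nonlinear_correlation(scene, ref, k=0.3):
    """kth-law nonlinear correlation: raise the magnitude of the
    cross-power spectrum to the power k, keep its phase, and
    inverse-transform. k=1 is linear matched filtering; k<1 sharpens
    the correlation peak."""
    S = np.fft.fft2(scene) * np.conj(np.fft.fft2(ref))
    S_nl = (np.abs(S) ** k) * np.exp(1j * np.angle(S))
    return np.abs(np.fft.ifft2(S_nl))

# Photon-counting model: each pixel receives Poisson counts with mean
# proportional to the normalized irradiance times the expected number
# of photo-counts in the whole scene.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
mean_counts = 1000.0
scene = rng.poisson(mean_counts * ref / ref.sum()).astype(float)
c = nonlinear_correlation(scene, ref, k=0.3)
print("peak at", np.unravel_index(np.argmax(c), c.shape))  # near (0, 0)
```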
Current displays are far from truly recreating visual reality. Doing so requires a full-parallax display that can reproduce the radiance field emanating from real scenes. The development of such technology will require a new generation of researchers trained in both the physics and the biology of human vision. The European Training Network on Full-Parallax Imaging (ETN-FPI) aims at developing this new generation. Under H2020 funding, ETN-FPI brings together 8 beneficiaries and 8 partner organizations from five EU countries with the aim of training 15 talented pre-doctoral students to become future research leaders in this area. In this contribution we explain the main objectives of the network, and specifically the advances obtained at the University of Valencia.
In this paper, we propose a new approach to improve the quality of microimages and display them on an integral-imaging monitor. Our main proposal is based on a stereo-hybrid 3D camera system. By its nature, a hybrid camera system has internal dissimilarities, so we describe our method for equalizing the hybrid sensors' characteristics together with a 3D data modification strategy. We generate the integral image using a synthetic back-projection mapping method. Finally, we project the integral image onto our proposed display system. We illustrate this procedure with imaging experiments that demonstrate the advantage of our approach.
Developing head-mounted displays (HMDs) that offer uncompromised optical pathways to both the digital and the physical world without encumbrance and discomfort confronts many grand challenges, from both technological and human-factors perspectives. Among them, minimizing visual discomfort is one of the key obstacles. One of the main contributing factors to visual discomfort is the inability to render proper focus cues in HMDs to stimulate natural eye accommodation responses, which leads to the well-known accommodation-convergence cue discrepancy problem. In this paper, I provide a summary of the various optical approaches toward enabling focus cues in HMDs for both virtual reality (VR) and augmented reality (AR).
Image information encoding using random phase masks produces speckle-like noise distributions when the sample is propagated in the Fresnel domain. As a result, the information cannot be accessed by simple visual inspection. Phase masks can easily be implemented in practice by attaching cellophane tape to the plain-text message. Conventional 2D phase masks can be generalized to 3D by combining glass and diffusers, resulting in a more complex physical unclonable function. In this communication, we model the behavior of a 3D phase mask using a simple approach: light is propagated through glass using the angular spectrum of plane waves, whereas the diffuser is described as a random phase mask plus a blurring effect on the amplitude of the propagated wave. Using different designs for the 3D phase mask and multiple samples, we demonstrate that classification is possible using the k-nearest-neighbors and random-forest machine learning algorithms.
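The forward model can be sketched directly. A minimal illustration of angular-spectrum propagation through the mask, assuming plane-wave illumination and omitting the amplitude-blur step for brevity; all numeric parameters are placeholders:

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field a distance z using the angular spectrum
    of plane waves (evanescent components are suppressed)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)   # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy 3D phase mask: a glass gap followed by a diffuser (random phase).
rng = np.random.default_rng(0)
n, wl, dx = 256, 633e-9, 5e-6                 # assumed parameters
field = np.ones((n, n), dtype=complex)        # plane-wave illumination
field = angular_spectrum(field, wl, dx, z=1e-3)        # through glass gap
field *= np.exp(1j * 2 * np.pi * rng.random((n, n)))   # diffuser phase
signature = np.abs(angular_spectrum(field, wl, dx, z=5e-3)) ** 2
# 'signature' (a speckle pattern) is what feeds the k-NN / random-forest
# classifiers after feature extraction.
```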
Microscope lenses can have either a large field of view (FOV) or high resolution, not both. Computational microscopy based on illumination coding circumvents this limit by fusing images from different illumination angles using nonlinear optimization algorithms. The result is a gigapixel-scale image having both wide FOV and high resolution. We demonstrate an experimentally robust reconstruction algorithm based on a second-order quasi-Newton method, combined with a novel phase initialization scheme. To further extend the gigapixel imaging capability to 3D, we develop a reconstruction method to process the 4D light-field measurements from sequential illumination scanning. The algorithm is based on a 'multislice' forward model that incorporates both 3D phase and diffraction effects, as well as multiple forward scattering. To solve the inverse problem, an iterative update procedure that combines phase retrieval and 'error back-propagation' is developed. To avoid local-minimum solutions, we further develop a novel physical-model-based initialization technique that accounts for both geometric-optics and first-order phase effects. The result is robust reconstruction of gigapixel 3D phase images having both wide FOV and super-resolution in all three dimensions. Experimental results from an LED-array microscope are demonstrated.
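The 'multislice' forward model treats the sample as a stack of thin slices. A minimal sketch, assuming each slice applies a thin phase screen proportional to its refractive-index contrast, with angular-spectrum propagation between slices:

```python
import numpy as np

def propagate(field, wavelength, dx, dz):
    """Angular-spectrum propagation over one inter-slice distance dz."""
    fx = np.fft.fftfreq(field.shape[0], d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = np.maximum(1.0 / wavelength**2 - FX**2 - FY**2, 0.0)
    H = np.exp(2j * np.pi * np.sqrt(arg) * dz)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def multislice_forward(delta_n, illum, wavelength, dx, dz):
    """Multislice model: thin phase screen per slice, then propagate dz.
    delta_n: (num_slices, N, N) refractive-index contrast of the sample.
    illum:   (N, N) complex illumination, e.g. a tilted plane wave."""
    k0 = 2 * np.pi / wavelength
    field = illum
    for slice_dn in delta_n:
        field = field * np.exp(1j * k0 * slice_dn * dz)  # phase screen
        field = propagate(field, wavelength, dx, dz)     # to next slice
    return field  # exit field; imaging optics and the sensor follow

# Scanning the LED array changes the tilt of 'illum', producing the 4D
# measurement set that the iterative reconstruction inverts.
```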
In this paper, we propose a new three-dimensional stereo image reconstruction algorithm for a photoacoustic medical imaging system. We also introduce and discuss a new theoretical algorithm based on the physical concept of the Radon transform. The key idea of the proposed algorithm is to evaluate the possibility that an acoustic source exists within a search region by using the geometric distance between each sensor element of the acoustic detector and the corresponding search region, represented by a grid. We derive the mathematical expression for the magnitude of this existence possibility, which can be used to implement the proposed algorithm, and we derive the equations for both the one-dimensional and the two-dimensional sensing-array cases. k-Wave simulation data are used to compare the image quality of the proposed algorithm with that of the conventional algorithm, in which an FFT is necessarily used. The k-Wave MATLAB simulation results demonstrate the effectiveness of the proposed reconstruction algorithm.
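A distance-based back-projection of this kind can be sketched as a delay-and-sum. A minimal illustration, assuming sensor signals sampled at rate fs and a uniform sound speed c; the paper's exact existence-possibility weighting is not reproduced:

```python
import numpy as np

def delay_and_sum(signals, sensor_pos, grid_pts, c=1500.0, fs=20e6):
    """Back-project sensor signals onto a grid of candidate source points.
    signals:    (num_sensors, num_samples) recorded pressure traces.
    sensor_pos: (num_sensors, 3) sensor coordinates in metres.
    grid_pts:   (num_points, 3) candidate source positions."""
    image = np.zeros(len(grid_pts))
    for trace, pos in zip(signals, sensor_pos):
        dist = np.linalg.norm(grid_pts - pos, axis=1)  # geometric distance
        idx = np.round(dist / c * fs).astype(int)      # time-of-flight sample
        valid = idx < trace.shape[0]
        image[valid] += trace[idx[valid]]              # accumulate evidence
    return image  # high values indicate likely acoustic sources
```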
Transcranial magnetic stimulation (TMS) is a non-invasive procedure that uses short, time-varying pulses of magnetic fields to stimulate nerve cells in the brain. In this method, a magnetic field generator ("TMS coil") produces small electric fields in a region of the brain via electromagnetic induction. This technique can be used to excite or inhibit the firing of neurons, and can therefore be used to treat various neurological disorders such as Parkinson's disease, stroke, migraine, and depression. It is, however, challenging to focus the induced electric field from TMS coils onto smaller regions of the brain. Since electric and magnetic fields are governed by the laws of electromagnetism, it is possible to numerically simulate and visualize these fields to accurately determine the site of maximum stimulation, and also to develop TMS coils that can focus the fields on the targeted regions. However, current software for computing and visualizing these fields is not real-time and works for only one position/orientation of the TMS coil, severely limiting its usage. This paper describes the development of an application that computes magnetic flux densities (h-fields) and visualizes their distribution for different TMS coil positions/orientations in real time using GPU shaders. The application is developed for desktop, commodity VR (HTC Vive), and fully immersive VR CAVE™ systems, for use by researchers, scientists, and medical professionals to quickly and effectively view the distribution of h-fields over MRI brain scans.
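The field computation itself follows the Biot-Savart law. A minimal CPU sketch, assuming the coil is discretized into short current segments; a real-time version evaluates the same sum per voxel inside a GPU shader:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def biot_savart(seg_centers, seg_vectors, current, points):
    """Magnetic flux density from a discretized coil.
    seg_centers: (S, 3) midpoints of the coil's current segments.
    seg_vectors: (S, 3) segment direction vectors dl (metres).
    points:      (P, 3) locations at which to evaluate the field."""
    B = np.zeros_like(points, dtype=float)
    for c0, dl in zip(seg_centers, seg_vectors):
        r = points - c0
        r3 = np.linalg.norm(r, axis=1, keepdims=True) ** 3
        B += MU0 * current / (4 * np.pi) * np.cross(dl, r) / r3
    return B

# Example: a circular loop of radius 5 cm carrying a 1 kA pulse peak.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
pts = np.c_[0.05 * np.cos(t), 0.05 * np.sin(t), np.zeros_like(t)]
dl = np.roll(pts, -1, axis=0) - pts
centers = pts + dl / 2
B = biot_savart(centers, dl, 1e3, np.array([[0.0, 0.0, 0.02]]))
```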
In an autostereoscopic display using multiple laser-beam-scanning projectors, accurate projector calibration is essential to alleviate optical distortions such as keystone distortion. However, calibrating hundreds of projectors with high accuracy takes too much time and effort. Moreover, even if ideal projector calibration were possible, the human visual system (HVS) limits the range over which viewers can perceive correct depth. After fine projector calibration, we explored its accuracy with a brute-force technique and analyzed the depth expression range (DER) at the given accuracy with respect to the HVS. We set five error conditions for projector calibration accuracy, derived the correlation between projector calibration error (PCE) and DER, and determined how calibration accuracy affects the DER. We found that, up to a certain calibration accuracy, the observer can still perceive the depth of a 3D object without problems. From this result, we propose a perceptual threshold of acceptable projector calibration accuracy for overall system efficiency.
The discrete Radon transform (DRT) calculates, with linearithmic complexity, the sum of pixels along a set of discrete lines covering all possible slopes and intercepts in an image. In 2006, a method was proposed to compute the inverse DRT that remains exact and fast, in spite of being iterative. In this work the DRT pair is used to propose a ridgelet and a curvelet transform that perform focus measurement of an image. The shape-from-focus approach based on the DRT pair is then applied to a focal stack to create a depth map of the scene.
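The shape-from-focus step is independent of the particular focus measure. A minimal sketch with a simple Laplacian-energy measure standing in for the paper's DRT-based ridgelet/curvelet measures:

```python
import numpy as np

def laplacian_energy(img):
    """Simple per-pixel focus measure: squared 4-neighbour Laplacian."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return lap ** 2

def shape_from_focus(focal_stack, depths):
    """Per-pixel depth: the depth whose slice maximizes the focus measure."""
    scores = np.stack([laplacian_energy(s) for s in focal_stack])
    return np.asarray(depths)[np.argmax(scores, axis=0)]

# focal_stack: list of images focused at the distances in 'depths';
# the result is a dense depth map of the scene.
```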
This paper discusses the current technical limitations on efforts to miniaturize lidar systems for use in automotive applications, and how those limits might be extended. The focus is on long-range scanning direct time-of-flight lidar systems using APD photodetectors. Miniaturization evokes severe problems in ensuring absolute laser safety while maintaining the system's performance in terms of maximum range, signal-to-noise ratio, detection probability, pixel density, and frame rate. Based on hypothetical but realistic specifications for an exemplary system, the complete lidar signal path is calculated. The maximum range of the system is used as a general performance indicator; it is determined by the minimum signal-to-noise ratio required to detect an object. Various system parameters are varied to find their impact on the system's range. Reducing the laser's pulse width and choosing the transimpedance amplifier's gain correctly have been shown to be practicable measures to double the system's range.
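The core of such a signal-path calculation is the lidar range equation. A minimal sketch with hypothetical parameter values, solving for the range at which the received signal falls to the minimum detectable SNR:

```python
import numpy as np

# Hypothetical system parameters (illustrative only).
P_tx = 75.0         # peak transmit power (W)
rho = 0.1           # diffuse target reflectivity
D_rx = 0.02         # receiver aperture diameter (m)
eta = 0.8           # combined optical efficiency
noise_power = 5e-9  # assumed detector noise-equivalent power (W)
snr_min = 5.0       # minimum SNR for reliable detection

# Range equation for a Lambertian target that fills the beam:
#   P_rx(R) = P_tx * rho / pi * (A_rx / R^2) * eta
A_rx = np.pi * (D_rx / 2) ** 2
R = np.linspace(1.0, 500.0, 5000)
P_rx = P_tx * rho / np.pi * A_rx / R**2 * eta
snr = P_rx / noise_power

max_range = R[snr >= snr_min].max()
print(f"maximum range ~ {max_range:.0f} m")
```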
Off-axis incoherent digital holography that enables single-shot capture of a three-dimensional (3D) distribution is introduced in this paper. Conventional fluorescence microscopy images 3D fields by sectioning, which prevents instant imaging of fast reactions in living cells. In order to realize digital holography with incoherent light, we adopted a common-path configuration to achieve the best temporal coherence, and by introducing gratings we shifted the direction of each beam to achieve off-axis interference. Simulations and preliminary experiments using LED light have confirmed the approach. We expect to use this method to realize 3D phase imaging and fluorescence imaging at the same time from the same biological sample.
Registration of local point sets to obtain one final data set is a vital technology in the 3D measurement of large-scale objects. In this paper, a new optical 3D measurement system using fringe projection is presented, which is divided into four parts: a moving device, a linking camera, stereo cameras, and a projector. Controlled by a computer, a sequence of local point sets can be obtained based on temporal phase unwrapping and stereo vision. Two basic principles, place dependence and phase dependence, are used to register these local point sets into one final data set, and bundle adjustment is used to eliminate registration errors.
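The rigid alignment underlying such registration can be sketched with the standard SVD-based (Kabsch) solution, assuming correspondences between two local point sets are already established by the place- and phase-dependence principles:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.
    src, dst: (N, 3) corresponding points from two local data sets."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Example: recover a known rotation/translation from sampled points.
rng = np.random.default_rng(0)
src = rng.random((100, 3))
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([0.1, -0.2, 0.05])
R, t = rigid_register(src, dst)  # R ~ R_true, t ~ (0.1, -0.2, 0.05)
```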
Tracking people with cameras in public areas is common today. However, with an increasing number of cameras it becomes harder and harder to review the data manually. Especially in safety-critical areas, automatic image exploitation could help to solve this problem. Setting up such a system can, however, be difficult because of its complexity; sensor placement is critical to ensure that people are detected and tracked reliably. We address this problem with a simulation framework that can simulate different camera setups in the desired environment, including animated characters. We combine this framework with our self-developed, distributed, and scalable people-tracking system to test its effectiveness, and we can show the results of the tracking system in real time in the simulated environment.
We overview a previously reported head-tracking integral imaging three-dimensional (3D) display that extends the viewing angle, adapting to the viewer's position without the crosstalk phenomenon. A head-detection system is applied to obtain the head position and rotation of a viewer, and a new set of elemental images is then computed using the smart pseudoscopic-to-orthoscopic conversion (SPOC) method for the head-tracking 3D display. Experimental results validate the proposed method for high-quality 3D display with a large viewing angle.
We present recent progress on the previously reported Multidimensional Optical Sensing and Imaging Systems (MOSIS) 2.0 for target recognition, material inspection, and integrated visualization. The degrees of freedom of MOSIS 2.0 include three-dimensional (3D) imaging, polarimetric imaging, and multispectral imaging; each of these features provides unique information about a scene. 3D computationally reconstructed images mitigate occlusion in front of the object, which can be exploited for 3D object recognition. The degree of polarization (DoP) of the light reflected from an object's surface is measured by 3D polarimetric imaging, and multispectral imaging is able to segment targets with specific spectral properties.
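The degree of polarization follows from the Stokes parameters. A minimal sketch for the linear DoP, assuming intensity images captured through a linear polarizer at 0°, 45°, 90°, and 135°:

```python
import numpy as np

def linear_dop(i0, i45, i90, i135, eps=1e-9):
    """Linear degree of polarization from four polarizer-angle images."""
    s0 = i0 + i90    # total intensity
    s1 = i0 - i90    # horizontal vs vertical preference
    s2 = i45 - i135  # +45 deg vs -45 deg preference
    return np.sqrt(s1**2 + s2**2) / (s0 + eps)

# DoP near 1 indicates strongly polarized reflection (e.g. specular or
# man-made surfaces); DoP near 0 indicates unpolarized light.
```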
This paper proposes a new display that can switch between 2D and 3D images on one monitor, which we call a hybrid display. In 3D display technologies, the reduction of image resolution is still an important issue: the more angular information is offered to the observer, the less spatial resolution remains for the image, because the panel resolution is fixed. For example, in an integral photography system, parts of the image without depth, like the background, lose resolution in the transformation from a 2D to a 3D image. Therefore, we propose a method that uses a liquid crystal component to quickly switch between the 2D image and the 3D image, with the 2D image set as a background to compensate the resolution. In the experiment, a hexagonal liquid crystal (LC) lens array is used in place of a fixed lens array. Moreover, in order to increase the lens power of the hexagonal LC lens array, we applied a high-resistance (Hi-R) layer structure on the electrode; the Hi-R layer shapes the gradient electric field and thus the lens profile. We use a panel with 801 PPI to display the integral image in our system. The result is a hybrid display combining a full-resolution 2D background with 3D objects exhibiting depth.
We propose to combine Kinect and integral-imaging technologies in the implementation of an integral display. The Kinect device permits the determination, in real time, of the (x, y, z) position of the observer relative to the monitor; owing to its active IR technology, the Kinect provides the observer position even in dark environments. In turn, the SPOC 2.0 algorithm permits the calculation of microimages adapted to the observer's 3D position. The smart combination of these two concepts permits the implementation, for the first time we believe, of an integral display that provides the observer with color 3D images of real scenes, viewed with full parallax and adapted dynamically to the observer's 3D position.
We overview our recent work [1] on utilizing three-dimensional (3D) optical phase codes for object authentication using the random forest classifier. A simple 3D optical phase code (OPC) is generated by combining multiple diffusers and glass slides. This tag is then placed on a quick-response (QR) code, a barcode capable of storing information that can be scanned under non-uniform illumination, rotation, and slight degradation. A coherent light source illuminates the OPC, and the transmitted light is captured by a CCD to record the unique signature. Features extracted from the signature are input to a pre-trained random forest classifier for authentication.
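The authentication stage can be sketched with scikit-learn. A minimal illustration, assuming the optical signatures have already been reduced to fixed-length feature vectors; the data here are synthetic stand-ins:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Stand-in data: rows are feature vectors extracted from recorded optical
# signatures; labels are 1 for the authentic OPC, 0 for impostors.
rng = np.random.default_rng(0)
X_true = rng.normal(loc=1.0, size=(50, 16))
X_fake = rng.normal(loc=0.0, size=(50, 16))
X = np.vstack([X_true, X_fake])
y = np.r_[np.ones(50), np.zeros(50)]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# A new signature is authenticated by its predicted class (or by
# thresholding predict_proba for a tunable false-accept rate).
new_signature = rng.normal(loc=1.0, size=(1, 16))
print("authentic" if clf.predict(new_signature)[0] == 1 else "reject")
```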
We studied a method of expanding the three-dimensional viewing freedom of an autostereoscopic 3D display with dynamic MVZ under tracking of the viewer's eyes. The dynamic MVZ technique can provide three-dimensional images with minimized crosstalk when the observer moves at the optimal viewing distance (OVD). To extend this technology to movement of the observer in the depth direction, we provide a new method of pixel mapping for the left-eye and right-eye images as the observer moves in depth. When this pixel mapping is applied to a common autostereoscopic 3D display, the image of the 3D display as seen from the observer's position has a non-uniform brightness distribution with a constant period in the horizontal direction, depending on the distance from the OVD in the depth direction. This makes it difficult to provide three-dimensional images of good quality to an observer who moves away from the OVD. In this study, we simulated the brightness distribution formed by the proposed pixel mapping for an observer moving in the depth direction away from the OVD, and confirmed the characteristics with photographs captured by two cameras at the observer position, simulating the viewer's two eyes, using a developed 3D display system. As a result, we found that in the developed 3D display system the observer can perceive 3D images of the same quality as at the OVD position even when moving away from it.
Fast computation of projection images is crucial in many applications, such as medical image reconstruction and light-field image processing. This requires parallelizing the computation and implementing it efficiently on a parallel processor such as a GPGPU (general-purpose computing on graphics processing units) device. In this research, we investigate methods for the parallel computation of projection images and their efficient implementation using CUDA (Compute Unified Device Architecture). We also study how to use GPU memory efficiently for the parallel processing.
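As one concrete instance, the shift-and-sum projection used in light-field refocusing parallelizes naturally over pixels. A minimal sketch using CuPy, a NumPy-compatible GPU array library backed by CUDA, with NumPy as a CPU fallback; the array shapes and offsets are assumptions:

```python
import numpy as np
try:
    import cupy as xp  # GPU arrays; operations run as CUDA kernels
except ImportError:
    xp = np            # CPU fallback with the same array API

def refocus(elemental, offsets, alpha):
    """Projection image by shift-and-sum over all elemental views.
    elemental: (K, H, W) array of views; offsets: per-view (dy, dx) pixel
    offsets; alpha scales the shift and selects the refocus depth."""
    acc = xp.zeros_like(elemental[0], dtype=xp.float32)
    for img, (dy, dx) in zip(elemental, offsets):
        # Whole-image shifts and adds are element-wise, so each pass
        # executes as one kernel parallelized over all output pixels.
        acc += xp.roll(img, (int(alpha * dy), int(alpha * dx)), axis=(0, 1))
    return acc / elemental.shape[0]

views = xp.asarray(np.random.default_rng(0).random((9, 64, 64)),
                   dtype=xp.float32)
offs = [(i // 3 - 1, i % 3 - 1) for i in range(9)]  # 3x3 view grid
proj = refocus(views, offs, alpha=4.0)
```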
We present the theory and experimental results behind using a 3D holographic signal for secure communications. A hologram of a complex 3D object is recorded to be used as a hard key for data encryption and decryption; the hologram is cut in half, with one piece used at each end of the system, one for encryption and the other for decryption. The first piece is modulated with the data to be encrypted. The hologram has an extremely complex phase distribution, which encodes the data signal incident on the first piece. In order to extract the data from the modulated holographic carrier, the signal must be passed through the second hologram, removing the complex phase contributions of the first. The signal beam from the first piece is used to illuminate the second piece of the same hologram, creating a self-reconstructing system. The 3D hologram's interference pattern is highly specific to the 3D object and to the conditions during the holographic writing process. With a sufficiently complex 3D object used to generate the holographic hard key, the data are nearly impossible to recover without the second piece of the same hologram. This method of producing a self-reconstructing hologram ensures that the pieces in use come from the same original hologram, providing a system hard key and making the system extremely difficult to counterfeit.
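The principle, encoding with one complex phase distribution and cancelling it with its matched counterpart, can be shown numerically. A toy sketch, modeling the two hologram pieces as a random phase screen and its conjugate, and propagation as a Fourier transform:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256

# Shared hard key: the hologram's complex phase distribution. One end of
# the link holds exp(+i*phi); the other holds its matched conjugate.
phi = 2 * np.pi * rng.random(n)
key_tx = np.exp(1j * phi)
key_rx = np.conj(key_tx)

data = rng.integers(0, 2, n).astype(float)  # binary data signal

ciphertext = np.fft.fft(data * key_tx)      # propagated, speckle-like field
recovered = np.real(np.fft.ifft(ciphertext) * key_rx)  # phases cancel

assert np.allclose(recovered, data)
# Without key_rx, an eavesdropper sees only a field whose amplitude and
# phase are statistically indistinguishable from speckle noise.
```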
We review our recently published work on a passive three-dimensional (3D) imaging technique known as integral imaging (II), using a long-wave infrared (LWIR) camera for face detection and depth estimation under low-light conditions. Multiple two-dimensional images of a scene, known as elemental images (EIs), are taken with the LWIR camera, each having a different perspective of the scene. This information is combined to generate a 3D reconstruction of the scene. A 3D face detection algorithm is applied to the reconstructed scene to detect a face behind occlusion and estimate its depth. Experimental results validate the method of detecting a human face behind occlusion and estimating its depth.
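The computational reconstruction step back-projects the elemental images to a chosen depth plane. A minimal sketch of shift-and-average reconstruction, assuming a regular K×K camera grid; pitch, focal length, and pixel size are placeholder parameters:

```python
import numpy as np

def reconstruct_plane(elemental, pitch, focal, pixel, z):
    """Integral-imaging reconstruction at depth z by shift-and-average.
    elemental: (K, K, H, W) grid of elemental images.
    pitch: camera spacing (m); focal: focal length (m);
    pixel: sensor pixel size (m); z: reconstruction depth (m)."""
    K = elemental.shape[0]
    shift = pitch * focal / (pixel * z)  # disparity in pixels at depth z
    acc = np.zeros(elemental.shape[2:], dtype=float)
    for i in range(K):
        for j in range(K):
            dy = int(round((i - K // 2) * shift))
            dx = int(round((j - K // 2) * shift))
            acc += np.roll(elemental[i, j], (dy, dx), axis=(0, 1))
    return acc / K**2  # objects at depth z align and sharpen; others blur

# Face detection is then run on reconstruct_plane(...) for a sweep of z
# values; the z giving the strongest detection estimates the face depth.
```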