This study presents a phase-modulation technique capable of flexibly extending the depth-of-field (DOF) of any diffraction pattern. The method, deep diffractive optics (DDO), integrates a needle-shaped beam phase modulator with a conventional phase pattern design. Our findings reveal that DDOs can extend the DOF of traditional diffractive optics by a factor of five. The method holds broad potential for applications in optical devices, systems, and emerging fields of photonics.
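The integration step can be illustrated as a per-pixel sum of the two phase patterns, wrapped back into [0, 2π). This is a minimal sketch; the function name and the toy values are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def combine_phase(doe_phase, needle_phase):
    """Combine a conventional DOE phase pattern with a needle-beam
    phase modulator by summing per pixel and wrapping to [0, 2*pi)."""
    return np.mod(doe_phase + needle_phase, 2 * np.pi)

# Toy 4x4 patterns in radians (hypothetical values).
doe = np.full((4, 4), 1.5 * np.pi)
needle = np.full((4, 4), np.pi)
combined = combine_phase(doe, needle)  # 2.5*pi wraps to 0.5*pi
```

Because phase is defined modulo 2π, the wrapped sum is optically equivalent to stacking the two phase elements.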
Holographic near-eye displays are a promising technology to provide realistic and visually comfortable imagery with improved user experience, but their coherent light sources limit the image quality and restrict the types of patterns that can be generated. A partially-coherent mode, supported by emerging fast spatial light modulators (SLMs), has the potential to overcome these limitations. However, these SLMs often have limited phase control precision, which current computer-generated holography (CGH) techniques are not equipped to handle. In this work, we present a flexible CGH framework for fast, highly-quantized SLMs. The framework can incorporate a wide range of content, including 2D and 2.5D RGBD images, 3D focal stacks, and 4D light fields, and we demonstrate its effectiveness through state-of-the-art simulation and experimental results.
Holographic near-eye displays have the potential to overcome many long-standing challenges for virtual and augmented reality (VR/AR) systems; they can reproduce full 3D depth cues, improve power efficiency, enable compact display systems, and correct for optical aberrations. Despite these remarkable benefits, this technology has been held back from widespread usage due to the limited image quality achieved by traditional holographic displays, the slow algorithms for computer-generated holography (CGH), and current bulky optical setups. Here, we review recent advances in CGH that utilize artificial intelligence (AI) techniques to solve these challenges.
From cameras to displays, visual computing systems are becoming ubiquitous in our daily life. However, their underlying design principles have stagnated after decades of evolution. Existing imaging devices require dedicated hardware that is not only complex and bulky, but also delivers suboptimal results in certain visual computing scenarios. This shortcoming stems from a lack of joint hardware-software design, which notably impedes the delivery of vivid 3D visual experiences on displays. By bridging advances in computer science and optics with machine intelligence strategies, my work engineers physically compact yet functionally powerful imaging solutions for cameras and displays, with applications in photography, wearable computing, IoT products, autonomous driving, medical imaging, and VR/AR/MR. In this talk, I will describe two classes of computational imaging modalities. First, in Deep Optics, we jointly optimize lightweight diffractive optics and differentiable image processing algorithms to enable high-fidelity imaging in domain-specific cameras. Second, I will discuss Neural Holography, which applies the same combination of machine intelligence and physics to long-standing problems in computer-generated holography. Specifically, I will describe several holographic display architectures that leverage camera-in-the-loop optimization and neural network model representations to deliver full-color, high-quality holographic images. Driven by advances in machine intelligence, these jointly optimized hardware-software imaging solutions can unlock the full potential of traditional cameras and displays and enable next-generation visual computing systems.
Recently, glass-free light field displays with multi-layer architectures have gradually entered the commercial stage. For near-eye displays, however, light field rendering still suffers from high computational cost and can hardly achieve an acceptable framerate for real-time display. This work develops a novel light field display pipeline that uses two gaze maps to reconstruct display patterns with a foveated vision effect. With GPU acceleration and emerging eye-tracking techniques, the gaze cone can be updated instantaneously. Experimental results demonstrate that the proposed display pipeline supports near-correct retinal blur with foveated vision at a high framerate and low computational cost.
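As a rough sketch of how a gaze cone can drive foveated reconstruction, the following blends a full-quality foveal render with a low-cost peripheral one using a radial gaze mask. The function, mask shape, and inputs are illustrative assumptions; in the paper, the gaze maps drive the multi-layer pattern reconstruction itself:

```python
import numpy as np

def foveated_blend(fovea_img, periphery_img, gaze, radius):
    """Blend a sharp foveal render with a cheap peripheral render
    using a smooth gaze-cone mask (1 at the gaze point, falling to
    0 outside the cone of the given pixel radius)."""
    h, w = fovea_img.shape
    y, x = np.mgrid[0:h, 0:w]
    dist = np.hypot(x - gaze[0], y - gaze[1])
    mask = np.clip(1.0 - dist / radius, 0.0, 1.0)
    return mask * fovea_img + (1.0 - mask) * periphery_img

fovea = np.ones((64, 64))        # stand-in for the full-quality render
periphery = np.zeros((64, 64))   # stand-in for the low-cost render
out = foveated_blend(fovea, periphery, gaze=(32, 32), radius=10)
```

Only the pixels inside the gaze cone pay the full rendering cost; with eye tracking, `gaze` is updated each frame.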
Holographic displays have recently shown remarkable progress in the research field. However, images reconstructed by existing display systems using phase-only spatial light modulators (SLMs) exhibit noticeable speckle and low contrast due to non-trivial diffraction efficiency loss. In this work, we investigate a novel holographic display architecture that uses two phase-only SLMs to enable high-quality, contrast-enhanced display experiences. Our system builds on emerging camera-in-the-loop optimization techniques that capture both diffracted and undiffracted light on the image plane with a camera and use this feedback to iteratively update the hologram patterns on the SLMs. Our experimental results demonstrate that the proposed display architecture delivers higher-contrast holographic images with little speckle, without the need for extra optical filtering.
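The camera-in-the-loop idea can be sketched as an iterative loop in which a capture of the reconstruction feeds back into the hologram update. In this minimal stand-in, the camera is replaced by a simulated amplitude measurement and the update is a Gerchberg-Saxton-style projection, not the system's actual two-SLM optimization:

```python
import numpy as np

def citl_update(phase, target_amp, capture, iters=50):
    """Camera-in-the-loop style refinement: measure the reconstruction,
    impose the target amplitude on the measured field, and back-propagate
    to update the SLM phase (Gerchberg-Saxton-style projection; in a real
    system `capture` would return a camera measurement)."""
    for _ in range(iters):
        field = np.fft.fft2(np.exp(1j * phase))        # forward propagation
        amp = capture(field)                           # "measured" amplitude
        corrected = target_amp * field / (amp + 1e-12) # keep phase, fix amp
        phase = np.angle(np.fft.ifft2(corrected))      # back-propagate
    return phase

rng = np.random.default_rng(0)
target = np.abs(rng.normal(size=(32, 32))) + 0.1  # toy target image
phase0 = rng.uniform(0, 2 * np.pi, size=(32, 32))
capture = lambda f: np.abs(f)                     # ideal camera stand-in
phase = citl_update(phase0, target, capture)
```

Replacing the simulated `capture` with a real camera measurement is what lets the loop absorb optical non-idealities the forward model misses.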
Holography has demonstrated the potential to achieve wide field-of-view, focus-supporting, optical see-through augmented reality displays in an eyeglasses form factor. Although phase-modulating spatial light modulators are becoming available, phase-only hologram generation algorithms remain imprecise, resulting in severe artifacts in the reconstructed imagery. Because the holographic phase retrieval problem is non-linear, non-convex, computationally expensive, and admits non-unique solutions, existing methods make several approximations to keep phase-only hologram computation tractable. In this work, we avoid such approximations and solve the holographic phase retrieval problem as a quadratic problem using complex Wirtinger gradients and standard first-order optimization methods. Our approach yields high-quality phase holograms with at least an order of magnitude improvement over existing state-of-the-art approaches.
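A minimal sketch of the Wirtinger approach: gradient descent on the SLM phase under the quadratic amplitude loss ‖|F e^{iφ}| − b‖², with the gradient derived via Wirtinger calculus. The Fourier propagation model, step size, and plain descent loop are simplified assumptions, not the paper's exact solver:

```python
import numpy as np

def wirtinger_phase_retrieval(target_amp, iters=300, lr=0.1, seed=0):
    """Minimize sum((|F exp(i*phi)| - b)^2) over the SLM phase phi.
    Wirtinger gradient: g_f = (|f| - b) * f/|f| in the image plane,
    pulled back through the (unitary) FFT, then chained into phi."""
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0, 2 * np.pi, target_amp.shape)
    for _ in range(iters):
        u = np.exp(1j * phi)
        f = np.fft.fft2(u, norm="ortho")
        g_f = (np.abs(f) - target_amp) * f / (np.abs(f) + 1e-12)
        g_u = np.fft.ifft2(g_f, norm="ortho")        # adjoint propagation
        phi -= lr * 2.0 * np.imag(np.conj(u) * g_u)  # dL/dphi, Wirtinger chain rule
    return phi

rng = np.random.default_rng(1)
phi_true = rng.uniform(0, 2 * np.pi, (16, 16))
b = np.abs(np.fft.fft2(np.exp(1j * phi_true), norm="ortho"))  # toy target
phi = wirtinger_phase_retrieval(b)
loss = np.sum((np.abs(np.fft.fft2(np.exp(1j * phi), norm="ortho")) - b) ** 2)
```

Because the loss and gradient are exact (no stochastic or alternating-projection approximation), any first-order optimizer can be swapped in for the plain descent step.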
Time-of-flight depth imaging and transient imaging are two imaging modalities that have recently received much interest. Despite much research, existing hardware systems are either limited in temporal resolution or prohibitively expensive. Arrays of Single Photon Avalanche Diodes (SPADs) are promising candidates to fill this gap by providing higher temporal resolution at an affordable cost. Unfortunately, state-of-the-art SPAD arrays are only available at relatively low resolutions and fill factors, and the low fill factor makes super-resolution imaging with SPAD arrays even more ill-posed. In this work, we hand-craft the optical structure of the SPAD array to enable super-resolution designs. In particular, we investigate optical coding for SPAD arrays, including improving the fill factor by assembling microstructures and modulating light directly with a diffractive optical element. Part of this design work has been applied in our recent work, and here we show several applications in depth and transient imaging.
Diffractive optical elements (DOEs) are promising lens candidates in computational imaging because they can drastically reduce the size and weight of imaging systems. Their inherent strong dispersion hinders the direct use of DOEs in full-spectrum imaging, causing an unacceptable loss of color fidelity. State-of-the-art methods for designing diffractive achromats either rely on hand-crafted point spread functions (PSFs) as an intermediate metric, or frame a differentiable end-to-end design pipeline that represents a 2D lens with a limited number of pixels and only a few wavelengths.
In this work, we investigate the joint optimization of an achromatic DOE and image processing using a fully differentiable model that maps the actual source image to the reconstructed one. The model comprises a wavelength-dependent propagation block, a sensor sampling block, and an image processing block. We jointly optimize the physical height of the DOE and the parameters of the image processing block to minimize errors over a hyperspectral image dataset. To reduce the computational complexity of 2D propagation, we simplify the rotationally symmetric DOE to a 1D profile. The joint optimization is implemented with TensorFlow's automatic differentiation to compute parameter gradients. Simulation results show that the proposed joint design outperforms conventional methods in preserving image fidelity.
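The wavelength-dependent propagation block rests on the standard phase delay of a height-map DOE, φ_λ = 2π(n_λ − 1)h/λ, which is what makes the PSF differ across the spectrum. A minimal numpy sketch of per-wavelength PSFs follows; the refractive index, height map, and far-field (Fourier) propagation model are illustrative assumptions:

```python
import numpy as np

def doe_psf(height, wavelength, n=1.46):
    """PSF of a height-map DOE at one wavelength: phase delay
    2*pi*(n-1)*h/lambda, then far-field propagation via an
    orthonormal FFT, normalized to unit energy."""
    phase = 2 * np.pi * (n - 1) * height / wavelength
    field = np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field, norm="ortho"))) ** 2
    return psf / psf.sum()

# Toy 64x64 parabolic height map in micrometres (illustrative only).
h = np.fromfunction(lambda y, x: ((x - 32) ** 2 + (y - 32) ** 2) * 1e-4,
                    (64, 64))
# PSFs at three wavelengths (um): the chromatic variation the joint
# design has to compensate for.
psfs = {lam: doe_psf(h, lam) for lam in (450e-3, 550e-3, 650e-3)}
```

In an end-to-end pipeline, `height` would be the optimization variable, with gradients flowing through this propagation into the image processing block.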
In general, optical designers correct chromatic aberrations by combining multiple lenses made of materials with extraordinary dispersion, which usually leads to considerable volume and weight. In this paper, a tailored design scheme that augments traditional optical design with state-of-the-art digital aberration correction algorithms is investigated. In particular, the proposed method is applied to the design of refractive telescopes by shifting the burden of correcting chromatic aberrations to software. By tailoring the point spread function in the primary optical design for one specified wavelength and then enforcing multi-wavelength information transfer in a post-processing step, the uncorrected chromatic aberrations are well mitigated. Accordingly, an f/8 telescope with a 1,400 mm focal length and a 0.14° field of view is designed with only two lens elements. The image quality of the designed telescope is evaluated by comparison with equivalent multi-lens designs produced in the traditional manner, which validates the effectiveness of our design scheme.
A scalable system that achieves large-sized light field three-dimensional display using multiple projectors and a directional diffuser is presented. The system employs an array of mini-projectors that project images onto a special cylindrical directional diffuser screen. The principle of light field reconstruction, the configuration of the projector array, and the characteristics of the directional diffuser are analyzed in turn. A mini-cinema-class prototype is presented, with 100 mini-projectors and a cylindrical directional diffuser that provides different diffusion angles in the horizontal and vertical directions. Bright, large-sized three-dimensional images displayed by the system can be observed from different horizontal viewing positions around the cylindrical display area, with both stereo parallax and motion parallax.