Single-shot high-speed mapping photography is a powerful tool for studying fast dynamics in diverse applications. Despite much recent progress, existing methods are still strained by the trade-off between sequence depth and light throughput, parallax-induced errors, limited imaging dimensionality, and potential damage from pulsed illumination. To overcome these limitations, we explore time-varying optical diffraction as a new gating mechanism for ultrahigh imaging speed. Inspired by pulse-front-tilt-gated imaging and the space-time duality in optics, we implement this paradigm in diffraction-gated real-time ultrahigh-speed mapping (DRUM) photography. The sweeping optical diffraction envelope generated by the inter-pattern transition of a digital micromirror device enables sequential time-gating at the sub-microsecond level. DRUM photography can capture a transient event in a single exposure at 4.8 million frames per second. We apply it to the investigation of femtosecond laser-induced breakdown in liquid and laser ablation in biological samples.
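As a quick consistency check (our arithmetic; only the 4.8-million-frames-per-second figure comes from the abstract), the inter-frame interval implied by the reported imaging speed is

$$\Delta t = \frac{1}{R} = \frac{1}{4.8\times10^{6}\ \mathrm{s^{-1}}} \approx 208\ \mathrm{ns},$$

which sits comfortably within the sub-microsecond gating regime described above.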
Learning-based compressed-sensing algorithms are widely used to recover the underlying datacube in snapshot compressive temporal imaging (SCTI), a technique for recording temporal data in a single exposure. Despite providing fast processing and high reconstruction performance, most deep-learning approaches are treated merely as substitutes for analytical-modeling-based reconstruction methods. In addition, these methods often presume ideal behavior of the optical instruments, neglecting deviations in the encoding and shearing processes. Consequently, they provide little feedback for evaluating SCTI's hardware performance, which limits the quality and robustness of reconstruction. To overcome these limitations, we develop a new end-to-end convolutional neural network, termed the deep high-dimensional adaptive net (D-HAN), that provides multi-faceted, process-aware supervision to an SCTI system. The D-HAN comprises three joint stages: four dense layers for shearing estimation, a set of parallel layers emulating the closed-form solution of SCTI's inverse problem, and a U-Net structure that serves as a filtering step. In system design, the D-HAN optimizes the coded aperture and establishes SCTI's sensing geometry. In image reconstruction, the D-HAN senses the shearing operation and retrieves a three-dimensional scene. D-HAN-supervised SCTI is experimentally validated using compressed optical-streaking ultrahigh-speed photography to image a rotating spinner at an imaging speed of 20 thousand frames per second. The D-HAN is expected to improve the reliability and stability of a variety of snapshot compressive imaging systems.
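To make the three-stage layout concrete, the following is a minimal PyTorch sketch. The layer widths, the assumed 64×64×8 datacube, the flattened-snapshot input, and the heavily abbreviated U-Net are illustrative stand-ins, not the published D-HAN architecture.

```python
# A minimal sketch of the three D-HAN stages described above.
# All sizes are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn

H, W, T = 64, 64, 8          # assumed (x, y, t) datacube dimensions
SNAP = H * (W + T - 1)       # assumed flattened (sheared) snapshot length

class ShearingEstimator(nn.Module):
    """Stage 1: four dense layers that regress the shearing parameter."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SNAP, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 16), nn.ReLU(),
            nn.Linear(16, 1),                 # estimated shearing step
        )
    def forward(self, snapshot):
        return self.net(snapshot.flatten(1))

class ClosedFormStage(nn.Module):
    """Stage 2: parallel per-frame layers emulating the closed-form inverse."""
    def __init__(self):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Linear(SNAP, H * W) for _ in range(T)]
        )
    def forward(self, snapshot):
        x = snapshot.flatten(1)
        frames = [b(x).view(-1, 1, H, W) for b in self.branches]
        return torch.cat(frames, dim=1)       # (batch, T, H, W)

class UNetFilter(nn.Module):
    """Stage 3: a (heavily abbreviated) U-Net that refines the estimate."""
    def __init__(self):
        super().__init__()
        self.down = nn.Sequential(nn.Conv2d(T, 32, 3, padding=1), nn.ReLU(),
                                  nn.MaxPool2d(2))
        self.up = nn.Sequential(nn.Upsample(scale_factor=2),
                                nn.Conv2d(32, T, 3, padding=1))
    def forward(self, cube):
        return cube + self.up(self.down(cube))   # residual refinement
```

In the actual system, the first stage's shearing estimate would also inform the second stage; the sketch keeps the stages independent for brevity.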
Photoluminescence lifetime imaging of upconverting nanoparticles features prominently in recent progress in optical thermometry. Despite remarkable advances in photoluminescent temperature indicators, existing optical instruments cannot perform wide-field photoluminescence lifetime imaging in real time and thus fall short in dynamic temperature mapping. Here, we develop single-shot photoluminescence lifetime imaging thermometry (SPLIT), built on a compressed-sensing ultrahigh-speed imaging paradigm. Using core/shell NaGdF4:Er3+,Yb3+/NaGdF4 upconverting nanoparticles as lifetime-based temperature indicators, we apply SPLIT to longitudinal wide-field temperature monitoring beneath a thin scattering medium. SPLIT also enables video-rate temperature mapping of a moving biological sample at single-cell resolution.
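As a sketch of the lifetime-to-temperature step (our simplified model; the mono-exponential decay and the linear calibration with the hypothetical constants tau_ref and dtau_dT are assumptions, not values from the work), each pixel's decay can be fit and mapped through a calibration curve:

```python
# A minimal sketch of lifetime-based thermometry: fit a mono-exponential
# decay per pixel, then map lifetime to temperature via a calibration
# curve. Both the decay model and the linear calibration are
# illustrative assumptions.
import numpy as np

def lifetime_map(stack, t):
    """stack: (T, H, W) decay movie; t: (T,) time stamps in seconds."""
    # log-linear least-squares fit of I(t) = A * exp(-t / tau) per pixel
    logI = np.log(np.clip(stack, 1e-12, None))          # (T, H, W)
    t_mean = t.mean()
    num = ((t - t_mean)[:, None, None] * (logI - logI.mean(0))).sum(0)
    den = ((t - t_mean) ** 2).sum()
    slope = num / den                                    # = -1 / tau
    return -1.0 / slope                                  # (H, W) lifetimes

def temperature_map(tau, tau_ref=450e-6, dtau_dT=-1.5e-6, T_ref=293.0):
    """Hypothetical linear calibration: tau(T) = tau_ref + dtau_dT * (T - T_ref)."""
    return T_ref + (tau - tau_ref) / dtau_dT
```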
Streak cameras are widely used to passively record dynamic events in numerous studies. However, in conventional operation, they are restricted to a one-dimensional field of view (FOV). To overcome this limitation, multiple-shot and single-shot two-dimensional (2D) streak imaging approaches have been developed. For the former, the (x, y, t) datacube is acquired by combining the conventional operation of streak cameras with a scanning step. For the latter, the (x, y, t) information is obtained by combining streak imaging with other imaging strategies, such as compressed sensing (CS). Despite contributing to many new studies, the multiple-shot methods require a large number of measurements to synthesize the datacube, and the single-shot approaches sacrifice spatiotemporal resolution or FOV. Here, we overcome these problems by developing streak-camera-based compressed ultrafast tomographic imaging (CUTI), a new operating mode universally adaptable to most streak cameras. Grafting the principle of computed tomography onto the spatiotemporal domain, CUTI uses temporal shearing and spatiotemporal integration to equivalently perform passive projections of a transient event. By leveraging the multiple sweep ranges readily available in a standard streak camera and a new CS-based reconstruction algorithm, the datacube of the transient event can be accurately recovered from a few streak images. Compared to scanning-based multiple-shot 2D streak imaging, CUTI largely reduces the data acquisition time. Compared to single-shot methods, CUTI eliminates the trade-off between spatial resolution or FOV and temporal resolution.
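In a standard streak-camera forward model (our notation; the abstract gives no equations), a sweep speed $v$ produces the streak image

$$E_v(x, y) = \int O\big(x,\, y - v\,t,\, t\big)\,\mathrm{d}t,$$

so each sweep speed integrates the datacube $O(x, y, t)$ along a different direction in the $(y, t)$ plane, playing the role of a projection angle in computed tomography; recovering $O$ from a few such projections then becomes a CS-regularized tomographic reconstruction.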
Single-shot two-dimensional (2D) optical imaging of transient scenes is indispensable for numerous areas of study. Among existing techniques, compressed optical-streaking ultrahigh-speed photography (COSUP) uses a cost-efficient design to achieve ultrahigh frame rates with off-the-shelf CCD and CMOS cameras. Thus far, COSUP's application scope has been limited by the long processing time and unstable image quality of analytical-modeling-based video reconstruction. To overcome these problems, we have developed a snapshot-to-video autoencoder (S2V-AE), a new deep neural network that maps a compressively recorded 2D image to a movie. The S2V-AE preserves spatiotemporal coherence in reconstructed videos and presents a flexible structure that tolerates changes in input data. Implemented in compressed ultrahigh-speed imaging, the S2V-AE enables single-shot machine-learning-assisted real-time (SMART) COSUP, which features a reconstruction time of 60 ms and a large sequence depth of 100 frames. SMART COSUP is applied to wide-field multiple-particle tracking at 20 thousand frames per second. As a universal computational framework, the S2V-AE is readily adaptable to other modalities in high-dimensional compressed sensing, and SMART COSUP is expected to find wide applications in applied and fundamental sciences.
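A minimal sketch of the snapshot-to-video idea in PyTorch follows; the convolutional encoder-decoder shape and the 64×64 frame size are assumptions, and only the 100-frame sequence depth comes from the text (the real S2V-AE also handles the streak-widened snapshot, which we ignore here).

```python
# A minimal snapshot-to-video autoencoder sketch: a 2D encoder whose
# features are decoded into T output frames. Illustrative only.
import torch
import torch.nn as nn

T, H, W = 100, 64, 64        # T = 100 frames from the text; H, W assumed

class S2VAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # 2D snapshot -> features
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(            # features -> T frames
            nn.Conv2d(64, T, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, snapshot):                 # (batch, 1, H, W)
        return self.decoder(self.encoder(snapshot))  # (batch, T, H, W)

video = S2VAutoencoder()(torch.rand(1, 1, H, W))  # one snapshot -> 100 frames
```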
High-speed three-dimensional (3D) surface imaging by structured-light profilometry is currently driven by industrial needs, medical applications, and entertainment. However, limited pattern projection speed has prevented structured illumination from reaching the kilohertz (kHz) level. The limited bandwidth of data transmission has prevented cameras from streaming data continuously, hampering kHz-level acquisition, processing, and display of 3D information during the occurrence of dynamic events (i.e., in real time). Moreover, the trade-off between a camera's sensor readout rate and its number of active pixels has kept existing methods from reaching a large field of view (FOV) at kHz-level acquisition. To overcome these limitations, we have developed high-speed band-limited illumination profilometry (BLIP) in two configurations. The first, employing a single camera with a CoaXPress interface (CI), enables real-time 3D surface reconstruction at 1 kHz. The second, employing two CI cameras, uses temporally interlaced acquisition (TIA) to push 3D imaging beyond 1000 frames per second over an FOV of up to 180×130 mm² (corresponding to 1180×860 pixels in captured images). We have demonstrated the systems' performance by imaging various static and fast-moving 3D objects. CI-BLIP has been applied to fluid mechanics by imaging the dynamics of a flag, allowing observation of wave propagation, gravity-induced phase mismatch, and asymmetric flapping motion. Meanwhile, TIA-BLIP has enabled 3D visualization of sound-induced glass vibration. We expect BLIP systems to find diverse scientific and industrial applications.
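A minimal sketch of the TIA idea under our assumptions (two cameras triggered half a frame period apart; the trigger scheme is not detailed in this abstract): interleaving the two streams doubles the effective frame rate relative to a single camera.

```python
# Temporally interlaced acquisition (TIA) sketch: merge two frame
# stacks whose exposures are offset by half a frame period.
import numpy as np

def interleave(cam_a, cam_b):
    """cam_a, cam_b: (N, H, W) frame stacks from the two cameras."""
    n, h, w = cam_a.shape
    out = np.empty((2 * n, h, w), dtype=cam_a.dtype)
    out[0::2] = cam_a        # even time slots: camera A
    out[1::2] = cam_b        # odd time slots: camera B
    return out               # effective rate: 2x the per-camera rate
```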
In this paper, we report a dispersion-eliminated coded-aperture light field (DECALF) imaging system based on digital micromirror devices (DMDs). Using a dual-DMD design to compensate for dispersion over the entire visible spectrum, the DECALF imaging system captures 1280×1024×5×5 (x, y, θ, φ) color light field images at 20 Hz. Using three-dimensional (3D) color scenes, we experimentally demonstrate multi-perspective viewing and digital refocusing with the DECALF imaging system.
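Digital refocusing from a captured (x, y, θ, φ) light field is commonly done by shift-and-add; the sketch below uses the stated 5×5 angular grid, while the shift-per-view scaling alpha and the integer-pixel shifts are simplifying assumptions (the paper's exact refocusing method is not specified here).

```python
# Shift-and-add digital refocusing of a (u, v, x, y) light field:
# each angular view is shifted in proportion to its angular offset,
# then the views are averaged to synthesize a chosen focal plane.
import numpy as np

def refocus(lf, alpha):
    """lf: (5, 5, H, W) light field (one color channel); alpha: refocus depth."""
    nu, nv, h, w = lf.shape
    out = np.zeros((h, w))
    for u in range(nu):
        for v in range(nv):
            du = int(round(alpha * (u - nu // 2)))   # per-view shift in x
            dv = int(round(alpha * (v - nv // 2)))   # per-view shift in y
            out += np.roll(lf[u, v], shift=(du, dv), axis=(0, 1))
    return out / (nu * nv)
```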
Conventional ultrafast optical imaging methods in the ultraviolet (UV) spectral range are based on pump-probe techniques, which cannot record non-repeatable and difficult-to-reproduce transient dynamics. Compressed ultrafast photography (CUP), a single-shot ultrafast optical imaging technique, can capture an entire transient event with a single exposure. However, CUP has been experimentally demonstrated only in the visible and near-infrared spectral ranges. Moreover, the need to tilt a digital micromirror device (DMD) in the system and the limited number of controllable parameters in the reconstruction algorithm also hinder CUP's performance. To overcome these limitations, we extended CUP to the UV spectrum by integrating a patterned palladium photocathode into a streak camera. This design nullifies the previous restrictions of DMD-based spatial encoding, improves the system's compactness, and offers good spectral adaptability. Meanwhile, by replacing the conventional TwIST algorithm with a plug-and-play alternating direction method of multipliers (PnP-ADMM) algorithm, the reconstruction is split into three sub-problems that precisely update the separated variables in successive steps, which considerably enhances CUP's reconstruction quality. The system exhibits a sequence depth of up to 1500 frames, each with 1750×500 pixels, at an imaging speed of 0.5 trillion frames per second. The system's ultrafast imaging capability was demonstrated by recording UV pulses traveling through various transmissive targets in a single exposure. We envision that our system will open up many new possibilities for imaging transient UV phenomena.
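The following is a minimal numpy sketch of a plug-and-play ADMM loop with the three sub-problems the abstract mentions: a data-fidelity update, a plug-in denoiser standing in for the prior's proximal operator, and a multiplier update. The forward operator A, its adjoint At, the Gaussian denoiser, and all step sizes are illustrative assumptions, not the paper's implementation.

```python
# Plug-and-play ADMM sketch for y = A(x) + noise.
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_admm(y, A, At, shape, rho=1.0, iters=50):
    """y: measurement; A/At: forward operator and its adjoint (callables)."""
    x = At(y).reshape(shape)           # initial estimate
    v = x.copy()                       # denoised splitting variable
    u = np.zeros(shape)                # scaled Lagrange multiplier
    for _ in range(iters):
        # (1) data-fidelity step: one gradient step on
        #     ||A(x) - y||^2 / 2 + rho * ||x - (v - u)||^2 / 2
        #     (a closed-form solve is typically used in practice)
        grad = At(A(x) - y).reshape(shape) + rho * (x - (v - u))
        x = x - 0.1 * grad
        # (2) prior step: plug-in denoiser replaces the proximal operator
        v = gaussian_filter(x + u, sigma=1.0)
        # (3) multiplier update
        u = u + x - v
    return x
```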
KEYWORDS: Photography, Optical imaging, CMOS cameras, 3D scanning, Reconstruction algorithms, Real time imaging, Physics, Materials science, Laser scanners, Imaging systems
Single-shot real-time ultrahigh-speed imaging is significant for capturing transient phenomena. Existing techniques fall short of simultaneously offering satisfactory imaging speed, sequence depth, and pixel count. To overcome these limitations, we have developed compressed optical-streaking ultrahigh-speed photography (COSUP), which records a dynamic scene (x, y, t) by applying the operations of spatial encoding, temporal shearing, and spatiotemporal integration. The COSUP system possesses an imaging speed of 1.5 million frames per second (fps), a sequence depth of 500 frames, and a pixel count of 0.5 megapixels per frame. COSUP is demonstrated by imaging single laser pulses passing through transmissive targets and by tracking a fast-moving object. We envision widespread applications of COSUP in biomedicine and materials science.
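The three operations named above compose the standard forward model; below is a minimal numpy sketch with illustrative sizes and an assumed one-pixel shear per frame.

```python
# COSUP-style forward model sketch: spatial encoding with a
# pseudo-random mask, temporal shearing along y, and spatiotemporal
# integration on the sensor.
import numpy as np

def cosup_forward(scene, mask):
    """scene: (T, H, W) dynamic scene; mask: (H, W) binary encoding mask."""
    t, h, w = scene.shape
    snapshot = np.zeros((h + t - 1, w))          # sheared, integrated image
    for k in range(t):
        coded = scene[k] * mask                  # spatial encoding
        snapshot[k:k + h, :] += coded            # shear by k pixels, integrate
    return snapshot

rng = np.random.default_rng(0)
mask = (rng.random((64, 64)) > 0.5).astype(float)
snap = cosup_forward(rng.random((10, 64, 64)), mask)   # (73, 64) snapshot
```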
We propose a bandwidth-limited imaging system based on a digital micromirror device (DMD) for three-dimensional (3D) structured-light profilometry. By using an error-diffusion algorithm with optical low-pass filtering, we obtain high-quality sinusoidal fringe patterns without mirror dithering. An N-step phase-shifting algorithm is then used to recover the objects' depth information. Using our bandwidth-limited projector, we demonstrate 3D profilometry of a static object.
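A minimal sketch of the fringe-generation step: Floyd-Steinberg error diffusion (one common error-diffusion algorithm; the paper's exact kernel is not specified here) binarizes an ideal sinusoidal fringe for the DMD, after which the optical low-pass filter smooths the binary pattern back toward a sinusoid. The fringe period and pattern size are assumptions.

```python
# Floyd-Steinberg error diffusion of a grayscale fringe in [0, 1]
# into the binary pattern a DMD can display.
import numpy as np

def error_diffuse(img):
    img = img.copy()
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - out[y, x]           # push error to neighbors
            if x + 1 < w:               img[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               img[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
    return out

x = np.arange(256)
fringe = 0.5 + 0.5 * np.cos(2 * np.pi * x / 32)   # assumed period: 32 px
binary_pattern = error_diffuse(np.tile(fringe, (256, 1)))
```

For reference, the standard N-step phase-shifting retrieval (up to sign convention) computes the wrapped phase as $\varphi = \arctan\!\big[\sum_n I_n \sin(2\pi n/N) \,/\, \sum_n I_n \cos(2\pi n/N)\big]$, from which depth is recovered after phase unwrapping.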
Bringing ultrafast temporal resolution to transmission electron microscopy (TEM) has historically been challenging. Despite significant recent progress in this direction, it remains difficult to achieve sub-nanosecond temporal resolution with a single electron pulse. To address this limitation, we propose a methodology that combines laser-assisted TEM with computational imaging based on compressed sensing (CS). In this technique, a two-dimensional (2D) transient event [i.e., (x, y) frames that vary in time] is recorded through a CS paradigm. The 2D streak image generated on a camera is used to reconstruct the datacube of the ultrafast event, with two spatial dimensions and one temporal dimension, via a CS-based image reconstruction algorithm. Using numerical simulation, we find that the reconstructed results are in good agreement with the ground truth, which demonstrates the applicability of CS-based computational imaging to laser-assisted TEM. Our proposed method, complementing existing ultrafast stroboscopic and nanosecond single-shot techniques, opens up the possibility of single-shot spatiotemporal imaging of irreversible structural phenomena with sub-nanosecond temporal resolution.
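In our notation (an assumption, as the abstract does not spell out the objective), such CS reconstructions typically solve

$$\hat{I} = \arg\min_{I}\ \tfrac{1}{2}\,\lVert E - \mathbf{T}\mathbf{S}\mathbf{C}\,I \rVert_2^2 + \lambda\,\Phi(I),$$

where $E$ is the recorded streak image, $\mathbf{C}$, $\mathbf{S}$, and $\mathbf{T}$ are the spatial-encoding, shearing, and integration operators, and $\Phi$ is a sparsity-promoting regularizer (e.g., total variation) weighted by $\lambda$.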