Three-dimensional (3D) imaging with structured light is crucial in diverse scenarios, ranging from intelligent manufacturing and medicine to entertainment. However, current structured light methods rely on projector–camera synchronization, which limits the use of affordable imaging devices and hinders consumer applications. In this work, we introduce an asynchronous structured light imaging approach based on generative deep neural networks that relaxes the synchronization constraint and addresses the challenge of fringe pattern aliasing without relying on any a priori constraint on the projection system. To this end, we propose a generative deep neural network with a U-Net-like encoder–decoder architecture that learns the underlying fringe features directly by exploiting the intrinsic priors of fringe pattern aliasing. The network is trained within an adversarial learning framework and supervised by a statistics-informed loss function. We evaluate its performance in terms of intensity, phase, and 3D reconstruction, and show that the trained network separates aliased fringe patterns and produces results comparable to those of a synchronous system: the absolute error is no greater than 8 μm, and the standard deviation does not exceed 3 μm. Evaluation on multiple objects and pattern types indicates that the approach generalizes to arbitrary asynchronous structured light scenes.
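The abstract does not give implementation details, so the sketch below is only an illustration of how a U-Net-like encoder–decoder generator for fringe separation might be structured in PyTorch. The class name, channel counts, network depth, and the two-channel output head are assumptions; the adversarial training loop and the statistics-informed loss described in the paper are not reproduced here.

```python
# Minimal sketch of a U-Net-like encoder-decoder generator for separating
# aliased fringe patterns. Channel counts and depth are illustrative
# assumptions, not the architecture reported in the paper.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, preserving spatial size."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class FringeSeparationUNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=2, base=32):
        super().__init__()
        # Encoder: progressively downsample while widening channels.
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        # Decoder: upsample and fuse with encoder features via skip connections.
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        # Output head: here, two channels for two separated fringe patterns.
        self.head = nn.Conv2d(base, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Example: one aliased grayscale fringe image -> two separated patterns.
net = FringeSeparationUNet()
aliased = torch.rand(1, 1, 256, 256)
separated = net(aliased)            # shape (1, 2, 256, 256)
```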
When using traditional phase-shifting profilometry for 3D measurement, the measured object must remain static during acquisition. When the object is moving, errors arise if the projection and capture of the fringe images are not fast enough. This paper proposes a new method to reconstruct a moving object by double sampling. A trigger control device is applied to the camera and projector to ensure that, after each projection, two consecutive images are captured before the next projection. The phase information is then retrieved by analyzing the relationship between the motion and the fringe patterns, and the moving object is reconstructed successfully. The proposed method increases the frame rate of moving-object reconstruction.
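The double-sampling trigger scheme and its motion-compensation analysis are specific to the paper and are not reproduced here; as background only, the sketch below shows the standard N-step phase-shifting computation that such a pipeline starts from. The function name and the four-step synthetic example are illustrative assumptions.

```python
# Background sketch: wrapped-phase retrieval from N phase-shifted fringe
# images I_n = A + B*cos(phi + 2*pi*n/N). This is the standard static
# phase-shifting step, not the paper's double-sampling scheme.
import numpy as np

def wrapped_phase(images):
    """Return the wrapped phase map in (-pi, pi] from a stack of N
    equally phase-shifted fringe images."""
    stack = np.asarray(images, dtype=np.float64)      # shape (N, H, W)
    n = stack.shape[0]
    deltas = 2.0 * np.pi * np.arange(n) / n           # phase shifts
    num = -np.tensordot(np.sin(deltas), stack, axes=1)
    den = np.tensordot(np.cos(deltas), stack, axes=1)
    return np.arctan2(num, den)

# Synthetic four-step example (values are illustrative only).
H, W = 4, 4
phi_true = np.linspace(0, np.pi / 2, H * W).reshape(H, W)
frames = [128 + 100 * np.cos(phi_true + 2 * np.pi * k / 4) for k in range(4)]
phi = wrapped_phase(frames)
print(np.allclose(phi, phi_true, atol=1e-6))          # True
```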
Recently, the curvature filter (CF) has been developed to implicitly minimize curvature for image processing problems such as smoothing and denoising. In this paper, we propose a parallel curvature filter (PCF) that runs on a GPU and is much faster than the original CF on a CPU. Inspired by convolutional neural networks accelerated by GPUs, the convolution operations in the curvature filter computation can be similarly parallelized, so that the PCF on a single GPU processes 33.2 gigapixels per second. Such performance allows it to be used in real-time applications such as video processing and biomedical image processing, where high throughput is required. Our experiments confirm the efficiency and effectiveness of the PCF.
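The authors' CUDA implementation is not described in the abstract; the sketch below is only an illustrative approximation that vectorizes one Gaussian curvature filter iteration with PyTorch tensors, which runs on a GPU when one is available. The eight projection distances and the four-subset domain decomposition follow the published curvature filter idea, but the function names, border handling, and update schedule here are assumptions rather than the paper's PCF.

```python
# Illustrative GPU-vectorized sketch of one Gaussian curvature filter (GCF)
# iteration: every pixel is moved by the smallest of eight local projection
# distances. The four-subset (2x2 checkerboard-like) decomposition keeps each
# update step free of read/write conflicts, which is what makes the filter
# amenable to GPU parallelism. Not the authors' CUDA kernel.
import torch

def gcf_step(u):
    """One Gaussian curvature filter sweep over a 2D image tensor u."""
    def sh(di, dj):
        # Neighbor access via circular shift (border handling simplified).
        return torch.roll(u, shifts=(di, dj), dims=(0, 1))

    for (pi, pj) in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        # Eight candidate distances to local tangent-plane estimates.
        d = torch.stack([
            (sh(-1, 0) + sh(1, 0)) / 2 - u,
            (sh(0, -1) + sh(0, 1)) / 2 - u,
            (sh(-1, -1) + sh(1, 1)) / 2 - u,
            (sh(-1, 1) + sh(1, -1)) / 2 - u,
            sh(-1, 0) + sh(0, -1) - sh(-1, -1) - u,
            sh(-1, 0) + sh(0, 1) - sh(-1, 1) - u,
            sh(0, -1) + sh(1, 0) - sh(1, -1) - u,
            sh(0, 1) + sh(1, 0) - sh(1, 1) - u,
        ])
        # Pick, per pixel, the distance with the smallest magnitude.
        idx = d.abs().argmin(dim=0, keepdim=True)
        dm = torch.gather(d, 0, idx).squeeze(0)
        # Update only the current subset of the 2x2 domain decomposition.
        mask = torch.zeros_like(u)
        mask[pi::2, pj::2] = 1.0
        u = u + dm * mask
    return u

# Example usage on a random image; runs on GPU if available.
device = "cuda" if torch.cuda.is_available() else "cpu"
img = torch.rand(256, 256, device=device)
smoothed = gcf_step(img)
for _ in range(10):                 # additional sweeps smooth the image further
    smoothed = gcf_step(smoothed)
```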