Recently, deep learning-based methods have been employed in optical measurement. Deep learning-based fringe-to-phase methods can achieve high-precision 3D topography measurement and are applied to various optical metrology tasks, including phase extraction, phase unwrapping, fringe order determination, and depth estimation. However, recovering the output of each of these metrological tasks from a single fringe pattern remains challenging. This paper proposes a novel network that effectively extracts the semantic features of fringe patterns by incorporating the design of the Transformer architecture while retaining the advantages of convolutional networks. The architecture primarily consists of a backbone, a decoder, and a feature extraction block that enhances features at different frequencies within a single fringe pattern. The backbone and decoder are specifically designed for wrapped phase prediction. Experimental results demonstrate that the network accurately predicts the wrapped phase from a single fringe pattern. Compared with previous methods, this paper's approach makes several contributions: it efficiently uses a new type of encoder to extract high-level semantic features from fringe patterns, and it requires only a single grayscale image as input, without relying on color composite images or additional prior information.
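For context on what "wrapped phase prediction" replaces, the following is a minimal numpy sketch of the conventional four-step phase-shifting baseline, which computes the wrapped phase from four fringe images rather than from the single pattern the network uses. The function name and synthetic data are illustrative, not from the paper.

```python
import numpy as np

def wrapped_phase_four_step(i1, i2, i3, i4):
    """Wrapped phase from four fringe images phase-shifted by pi/2 each.

    Classical four-step formula: phi = arctan2(I4 - I2, I1 - I3),
    yielding values in (-pi, pi].
    """
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic demonstration: recover a known phase map from four shifted fringes.
x = np.linspace(0, 4 * np.pi, 256)
phi_true = np.tile(x, (64, 1))                       # ground-truth phase map
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
fringes = [128 + 100 * np.cos(phi_true + d) for d in shifts]

phi_wrapped = wrapped_phase_four_step(*fringes)
# The result equals the true phase wrapped into (-pi, pi].
assert np.allclose(phi_wrapped, np.angle(np.exp(1j * phi_true)), atol=1e-6)
```

The deep learning approach described above aims to produce the same wrapped phase map from one grayscale fringe image instead of four.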
In fringe projection profilometry (FPP), no clear mathematical expression has traditionally been available for designing sinusoidal fringe patterns for various objects. For this reason, we present an adaptive algorithm that generates optimal fringe patterns using an oriented bounding box (OBB) and a homography transform. First, the features of various objects, segmented with the deep learning network Mask R-CNN, are represented by the spindle orientation and length of the OBB. Second, the adaptive fringe patterns in the camera's field of view are generated by fusing the OBB with the mathematical expression of conventional intensity fringe patterns. Finally, the fringe patterns in the camera's field of view are transformed into the projector's field of view by homography. Experiments have been carried out to validate the performance of the proposed method.
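The two building blocks named above, a conventional sinusoidal fringe pattern oriented along an angle (such as the OBB spindle direction) and a homography mapping from camera view to projector view, can be sketched as follows. This is a simplified illustration under assumed names; the Mask R-CNN segmentation, OBB fitting, and the calibrated homography itself are outside its scope.

```python
import numpy as np

def fringe_pattern(height, width, period, angle_rad, a=127.5, b=127.5):
    """Conventional sinusoidal intensity fringe pattern:
    I(x, y) = a + b * cos(2*pi * (x*cos(theta) + y*sin(theta)) / period),
    where theta sets the fringe orientation (e.g. the OBB spindle angle)."""
    y, x = np.mgrid[0:height, 0:width]
    u = x * np.cos(angle_rad) + y * np.sin(angle_rad)
    return a + b * np.cos(2 * np.pi * u / period)

def warp_with_homography(pattern, h_mat, out_shape):
    """Map a camera-view pattern into the projector view via a 3x3 homography.

    For each projector pixel, the inverse homography gives the matching
    camera-view coordinate; nearest-neighbour sampling keeps the sketch short.
    """
    h_inv = np.linalg.inv(h_mat)
    hy, wx = out_shape
    yy, xx = np.mgrid[0:hy, 0:wx]
    pts = np.stack([xx.ravel(), yy.ravel(), np.ones(xx.size)])
    src = h_inv @ pts
    sx = np.clip(np.round(src[0] / src[2]).astype(int), 0, pattern.shape[1] - 1)
    sy = np.clip(np.round(src[1] / src[2]).astype(int), 0, pattern.shape[0] - 1)
    return pattern[sy, sx].reshape(out_shape)

cam = fringe_pattern(120, 160, period=16, angle_rad=np.deg2rad(30))
h_identity = np.eye(3)                 # placeholder camera-to-projector homography
proj = warp_with_homography(cam, h_identity, (120, 160))
assert np.allclose(proj, cam)          # identity homography leaves the pattern unchanged
```

In practice the homography would come from system calibration, and the period and angle would be derived per object from its OBB.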
The absolute phase plays a crucial role in various applications, including camera and projector calibration, stereo matching, structured light measurement, and fringe projection profilometry (FPP). Recently, significant progress has been made in deep learning-based approaches for absolute phase recovery. Many deep neural networks have been created, improved, or directly integrated into the phase retrieval procedure. A common trend among these methods is the sequential calculation of the wrapped phase, fringe order, and absolute phase. The accuracy of each earlier result directly affects the subsequent steps, leading to potential error accumulation and reduced recovery speed. To address these challenges, we propose an end-to-end deep learning method based on Res-UNet that directly predicts the absolute phase from a single fringe image without any additional fringe patterns. The presented approach simplifies the phase unwrapping procedure and overcomes the limitations of existing techniques. Notably, to save the cost and labor of training the Res-UNet, a virtual digital fringe projection system built with 3D Studio Max is established to generate data close to reality. Experiments have been carried out to validate the performance of the proposed method.
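The sequential pipeline the end-to-end network bypasses rests on the standard relation between wrapped phase, fringe order, and absolute phase, Phi = phi_w + 2*pi*k. A minimal numpy sketch of that relation (with synthetic data, not the paper's network outputs):

```python
import numpy as np

# Quantities computed sequentially in conventional pipelines:
#   wrapped phase phi_w in (-pi, pi], integer fringe order k,
#   absolute phase Phi = phi_w + 2*pi*k.
phi_abs_true = np.linspace(0, 20 * np.pi, 1000)           # monotone absolute phase
phi_wrapped = np.angle(np.exp(1j * phi_abs_true))         # wrap into (-pi, pi]
k = np.round((phi_abs_true - phi_wrapped) / (2 * np.pi))  # fringe order
phi_abs = phi_wrapped + 2 * np.pi * k                     # unwrapped result
assert np.allclose(phi_abs, phi_abs_true, atol=1e-9)
```

An error in the predicted fringe order k shifts the recovered phase by a multiple of 2*pi, which is exactly the error-accumulation risk the end-to-end prediction avoids.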