KEYWORDS: 3D metrology, Cameras, Focus stacking, Modulation transfer functions, Depth of field, Projection systems, Sensors, 3D image processing, Stereoscopy
The depth range that can be captured by structured-light 3D sensors is limited by the depth of field of the lenses used. Focus stacking is a common approach to extend the depth of field. However, focus variation drastically reduces the measurement speed of pattern projection-based sensors, hindering their use in high-speed applications such as in-line process control. Moreover, electromechanical components, e.g., electronically tunable lenses, increase the complexity of the optics. In this contribution, we introduce chromatic focus stacking, an approach that allows for a very fast focus change by designing the axial chromatic aberration of an objective lens such that the depth-of-field regions of selected wavelengths adjoin one another. To evaluate our concept experimentally, we determine the distance-dependent 3D modulation transfer function at a tilted edge and present the 3D measurement of a printed circuit board with comparatively high structures.
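To illustrate the "adjoining depth-of-field" idea, the following is a minimal sketch, not the authors' design code: it uses thin-lens depth-of-field formulas, and all numbers (per-wavelength focus distances, f-number, circle of confusion, focal length) are assumptions chosen for illustration only.

```python
# Minimal sketch of chromatic focus stacking: each selected wavelength
# focuses the same sensor plane onto a different object distance, and the
# per-wavelength depth-of-field (DOF) intervals are meant to adjoin.
# All numerical values are illustrative assumptions, not values from the paper.

def dof_limits(f_mm, N, c_mm, s_mm):
    """Near/far DOF limits (thin-lens approximation) for a lens of focal
    length f_mm, f-number N, circle of confusion c_mm, focused at s_mm."""
    H = f_mm**2 / (N * c_mm) + f_mm                       # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm) if s_mm < H else float("inf")
    return near, far

# Hypothetical axial chromatic focal shift: longer wavelengths focus farther.
focus_distances = {450: 190.0, 550: 200.0, 650: 210.0}   # nm -> object focus (mm)
f, N, c = 25.0, 11.0, 0.015                              # mm, f-number, mm

regions = {wl: dof_limits(f, N, c, s) for wl, s in focus_distances.items()}
for wl, (near, far) in sorted(regions.items()):
    print(f"{wl} nm: DOF {near:.1f} .. {far:.1f} mm")

# Design goal: consecutive DOF intervals adjoin, so their union forms one
# extended, gap-free measurement volume.
spans = sorted(regions.values())
gap_free = all(prev_far >= next_near
               for (_, prev_far), (next_near, _) in zip(spans, spans[1:]))
print("Adjoining DOF regions:", gap_free)
```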
KEYWORDS: RGB color model, Near infrared, Cameras, Clouds, Calibration, Data modeling, Fermium, Frequency modulation, Image segmentation, Structured light
In our work, we propose a deep learning solution to complete RGB-D images acquired by a NIR structured-light scanner with an additional RGB camera that measures the visible spectrum. Building on work on image inpainting, we designed and trained a neural network architecture that takes the available fringe and color images as well as the reliably measured depth information and completes the depth images. We particularly focus on occlusion-caused image artifacts that naturally occur due to geometric visibility constraints. Hence, we are able to reconstruct a dense depth image from the viewpoint of the RGB camera, which can be used for further post-processing and visualization purposes.
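The following is a minimal sketch of a depth-completion network in the spirit of the described approach, written in PyTorch. The encoder-decoder layout, channel counts, and the validity-mask handling are assumptions for illustration; the paper's actual architecture is not reproduced here.

```python
# Sketch: complete a dense depth map from RGB, fringe image, measured depth,
# and a validity mask. Architecture details are assumptions for illustration.
import torch
import torch.nn as nn

class DepthCompletionNet(nn.Module):
    def __init__(self, base=32):
        super().__init__()
        in_ch = 6  # 3 (RGB) + 1 (fringe) + 1 (measured depth) + 1 (valid mask)
        self.enc1 = self._block(in_ch, base)
        self.enc2 = self._block(base, base * 2)
        self.enc3 = self._block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec2 = self._block(base * 4 + base * 2, base * 2)
        self.dec1 = self._block(base * 2 + base, base)
        self.out = nn.Conv2d(base, 1, kernel_size=1)  # dense depth

    @staticmethod
    def _block(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, rgb, fringe, depth, mask):
        x = torch.cat([rgb, fringe, depth * mask, mask], dim=1)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up(d2), e1], dim=1))
        pred = self.out(d1)
        # Keep reliably measured depth; only fill occluded/missing pixels.
        return mask * depth + (1 - mask) * pred

# Example forward pass with random tensors (batch of 1, 256x256 images).
net = DepthCompletionNet()
rgb = torch.rand(1, 3, 256, 256)
fringe = torch.rand(1, 1, 256, 256)
depth = torch.rand(1, 1, 256, 256)
mask = (torch.rand(1, 1, 256, 256) > 0.3).float()  # 1 = reliable depth
dense = net(rgb, fringe, depth, mask)
print(dense.shape)  # torch.Size([1, 1, 256, 256])
```

The mask-based blending at the output reflects the stated goal of keeping the reliably measured depth untouched and completing only the occluded regions.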
We report on the implementation of a liquid crystal on silicon (LCOS) microdisplay with 1920 by 1080 resolution and a 720 Hz frame rate. The driving solution is FPGA-based: the input signal is converted from an ultrahigh-resolution HDMI 2.0 stream into HD frames that are output at the specified 720 Hz frame rate. Alternatively, the signal can be generated directly on the FPGA with a built-in pattern generator. The display shows switching times below 1.5 ms at the selected working temperature, and the addressed image achieves a bit depth of 8 bits within each frame. The microdisplay is used in a fringe projection-based 3D sensing system implemented by Fraunhofer IOF.
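As a back-of-the-envelope illustration of the frame-rate conversion, the sketch below assumes (this packing scheme is not stated in the abstract) that the FPGA unpacks a 3840 x 2160, 60 Hz, 24 bit/pixel HDMI 2.0 stream into 1920 x 1080, 8-bit sub-frames; under that assumption the numbers come out to exactly 720 Hz.

```python
# Illustrative check of the HD-frame rate achievable from an HDMI 2.0 input.
# Assumption: 3840x2160 @ 60 Hz, 24 bit/pixel input, repacked into
# 1920x1080, 8-bit frames for the LCOS panel.

hd_w, hd_h, hd_bits = 1920, 1080, 8                       # one LCOS frame
uhd_w, uhd_h, uhd_bits, uhd_rate = 3840, 2160, 24, 60     # assumed input format

bits_per_uhd_frame = uhd_w * uhd_h * uhd_bits
bits_per_hd_frame = hd_w * hd_h * hd_bits

hd_frames_per_uhd_frame = bits_per_uhd_frame // bits_per_hd_frame   # = 12
lcos_frame_rate = hd_frames_per_uhd_frame * uhd_rate                # = 720 Hz
payload_gbit_s = bits_per_hd_frame * lcos_frame_rate / 1e9          # ~11.9 Gbit/s

print(f"HD frames per UHD input frame: {hd_frames_per_uhd_frame}")
print(f"Resulting LCOS frame rate: {lcos_frame_rate} Hz")
print(f"Pixel payload to the display: {payload_gbit_s:.1f} Gbit/s")
```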