As Industry 4.0 advances, the requirements for three-dimensional assessment of industrial products grow in complexity. Frequently, this complexity cannot be met by a single sensor without drastically escalating procedural costs, which can be prohibitive for many applications. Photogrammetry is an image-based three-dimensional measurement technique that can deliver good metrological results, although it has limitations, especially when the object texture is used for point correspondence. Employing an infrared camera can offer advantages when measuring components with few surface features or translucent workpieces, but such cameras generally lack sufficient resolution to properly reconstruct the object with typical photogrammetry techniques. This scenario is well suited to data fusion of two or more sensors, resulting in a more informative point cloud. This work therefore proposes the three-dimensional measurement of a transparent workpiece using data acquired from a visible-light camera and an infrared camera. In the proposed approach, a pixel-level image fusion technique based on two-dimensional wavelet decomposition is used as a step of the registration process to combine images from these devices. This procedure is compared with reconstruction of the object using only the infrared images and only the visible-light images. The results show a more complete point cloud when using data fusion than when using only visible-light or infrared images.
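As an illustrative sketch of the pixel-level fusion step, the Python fragment below fuses two pre-registered, same-size grayscale images in the wavelet domain using PyWavelets. The fusion rule (averaged approximation coefficients, maximum-magnitude detail coefficients), the db2 wavelet, and the function name wavelet_fuse are assumptions for illustration; the abstract does not specify these choices.

```python
# Minimal sketch of pixel-level image fusion via 2D wavelet decomposition,
# assuming pre-registered grayscale images of identical size from the
# visible-light and infrared cameras. The fusion rule below is a common
# choice, not necessarily the one used in the paper.
import numpy as np
import pywt

def wavelet_fuse(visible: np.ndarray, infrared: np.ndarray,
                 wavelet: str = "db2", level: int = 2) -> np.ndarray:
    """Fuse two registered grayscale images in the wavelet domain."""
    c_vis = pywt.wavedec2(visible.astype(float), wavelet, level=level)
    c_ir = pywt.wavedec2(infrared.astype(float), wavelet, level=level)

    # Average the coarse approximation coefficients.
    fused = [(c_vis[0] + c_ir[0]) / 2.0]

    # For each detail band, keep the coefficient with larger magnitude,
    # so salient edges from either sensor survive the fusion.
    for dv, di in zip(c_vis[1:], c_ir[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(dv, di)))

    return pywt.waverec2(fused, wavelet)
```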
A portable optical measurement system, capable of measuring free-form surfaces over large areas and comparing them with reference surfaces, was developed in this work. The system merges passive and active stereo vision. Data are acquired in both modes for each partially overlapped position of the system, covering the entire area of interest of the free-form surface or part being measured.
In passive stereo vision mode, circular targets are used to determine the coarse position of the system (i.e. cameras and projector) with respect to a global coordinate system defined by the targets. In active stereo vision mode, three-dimensional point clouds are locally measured and registered in the global coordinate system. The algorithm merges these point clouds into a single, intrinsically structured regular mesh, allowing efficient comparison between different surfaces because the correspondence of points can be predefined. Experimental evaluations, using different kinds of geometric patterns and calibrated free-form surfaces, demonstrate the feasibility and the advantages of the proposed methods.
KEYWORDS: Cameras, Clouds, Calibration, 3D metrology, 3D acquisition, 3D image processing, Image processing, Fringe analysis, Optical spheres, 3D modeling
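To make the registration step concrete, the sketch below estimates the rigid transform that maps locally detected circular-target centers onto their known global coordinates, using the standard SVD-based (Kabsch) least-squares fit. The abstract does not name the estimator, so this is an assumed, common implementation choice.

```python
# Rigid registration sketch: fit (R, t) mapping local target centers onto
# their known global positions via the SVD-based Kabsch solution. This is
# an assumed, standard estimator; the paper's exact method is not stated.
import numpy as np

def rigid_transform(local_pts: np.ndarray, global_pts: np.ndarray):
    """Least-squares rigid transform; both arrays are N x 3, row-matched."""
    c_l = local_pts.mean(axis=0)
    c_g = global_pts.mean(axis=0)
    H = (local_pts - c_l).T @ (global_pts - c_g)
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections: force det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_g - R @ c_l
    return R, t

# A locally measured cloud is then registered as: (R @ cloud.T).T + t
```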
This paper presents a very simple and effective procedure to combine data from two cameras, at different positions, to produce point clouds on regular meshes. The main idea starts by fixing two independent coordinates for a node of a regular mesh. The third coordinate is found by scanning the dependent coordinate across the measurement volume until the phase values of the fringe patterns, acquired by the cameras, reach the same common value. This approach naturally produces structured point clouds independently of the number of cameras used. To measure large or complex volumes, some marks are distributed over the geometry. Two portions of the geometry with common marks are measured in different positions and stitched together. Many portions can be measured and stitched until the geometry is completely measured. The final result is a point cloud on a regular mesh.
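A minimal Python sketch of the dependent-coordinate scan follows. It assumes each camera provides a calibrated function phase(x, y, z) returning the unwrapped fringe phase observed for a candidate 3D point, and that the phase difference changes sign monotonically across the scanned interval; phase_cam1 and phase_cam2 are hypothetical placeholders, since the abstract does not detail the camera and phase models.

```python
# Sketch of the dependent-coordinate scan: bisect z until both cameras
# report the same fringe phase at (x, y, z). Assumes phase_cam1/phase_cam2
# are calibrated, hypothetical phase models and that their difference is
# monotonic with a sign change over [z_min, z_max].
import numpy as np

def find_depth(x, y, phase_cam1, phase_cam2, z_min, z_max, tol=1e-6):
    """Bisect z until both cameras report the same phase at (x, y, z)."""
    def diff(z):
        return phase_cam1(x, y, z) - phase_cam2(x, y, z)
    lo, hi = z_min, z_max
    f_lo = diff(lo)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        f_mid = diff(mid)
        if abs(f_mid) < tol:
            return mid
        if np.sign(f_mid) == np.sign(f_lo):
            lo, f_lo = mid, f_mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Repeating this for every (x, y) node of the regular mesh yields a
# structured point cloud z(x, y) directly on the mesh.
```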
Fringe projection has been widely used for 3D geometry measurement in several classes of applications. The basic system is formed by a fringe projector and a camera. A triangulation algorithm is frequently used for retrieving 3D information from a scene. Alternatively, two cameras can be used in combination with one fringe projector. This configuration yields a significant improvement in measurement uncertainty, since only the phase information encoded in the fringe pattern is used to locate homologous points in the triangulation algorithm, and nonlinearity or imperfections of the fringe projector do not induce measurement errors. However, some parts with complex geometry cannot easily be seen by both cameras at a convenient angle, which limits the applicability of this configuration. Frequently, the point clouds acquired with such systems are unstructured and, consequently, a non-regular mesh is obtained. This paper presents a very simple and effective procedure to combine data from multiple cameras to produce point clouds on a regular mesh. The main idea starts by fixing two independent coordinates for a node of a regular mesh. The third coordinate is found by scanning the dependent coordinate across the measurement volume until the phase values of the fringe patterns, acquired by the multiple cameras, reach the same common value. This approach naturally produces structured point clouds independently of the number of cameras used. As an example, a 3D shape is acquired by an ordinary multimedia projector and a set of four low-cost webcams. A calibration is necessary to reference the four webcams to the same coordinate system. For that, a reference object, composed of a set of small spheres at calibrated positions, is used.
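The sphere-based referencing can be illustrated with an algebraic least-squares sphere fit: each sphere center is estimated from points measured on its surface, and the matched centers then anchor each webcam's frame to the calibrated reference (e.g. with a rigid fit such as the one sketched earlier). The linearized fit below is an assumed, standard choice; the abstract does not specify the method.

```python
# Algebraic sphere fit: for points p on a sphere with center c, radius r,
# |p|^2 = 2 c.p + (r^2 - |c|^2), which is linear in c and d = r^2 - |c|^2.
# An assumed, standard technique; the paper's exact method is not stated.
import numpy as np

def fit_sphere(pts: np.ndarray):
    """Least-squares sphere fit: pts is N x 3, returns (center, radius)."""
    A = np.column_stack([2.0 * pts, np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius
```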