Paper | 3 April 2000
Mesh-based integration of range and color images
Yiyong Sun, Christophe Dumont, Mongi A. Abidi
Abstract
This paper discusses the construction of photorealistic 3D models from multisensor data. The data typically comprise multiple views of range and color images to be integrated into a unified 3D model. The integration process uses a mesh-based representation of the range data, and the advantages of the mesh-based approach over a volumetric approach are discussed. First, two meshes, corresponding to range images taken from two different viewpoints, are registered to the same world coordinate system and then integrated. This process is repeated until all views have been integrated. The integration is straightforward unless the two triangle meshes overlap. Overlapping measurements are detected, and the less confident triangles are removed based on their distance from, and orientation relative to, the camera viewpoint. After removing the overlapping patches, the meshes are seamed together to build a single 3D model. The model is incrementally updated after each new viewpoint is integrated. The color images are used as texture in the finished scene model. The results show that the approach is efficient for the integration of large, multimodal data sets.
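The decisive step in the integration is choosing which of two overlapping triangles to keep. The sketch below illustrates one plausible way to score per-triangle confidence from the two cues named in the abstract (distance to the sensor and orientation relative to the viewing direction) and to drop the weaker member of each overlapping pair. The specific weighting formula, function names, and the precomputed list of overlap pairs are illustrative assumptions; the paper only states that both cues are used.

```python
import numpy as np

def triangle_confidence(vertices, faces, viewpoint):
    """Per-triangle confidence from distance and viewing angle.

    vertices : (N, 3) array of mesh vertex positions
    faces    : (M, 3) integer array of triangle vertex indices
    viewpoint: (3,) camera position for the range image this mesh came from

    Hypothetical weighting: triangles that face the camera and lie close
    to it are treated as more reliable than grazing or distant ones.
    """
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    centroids = (v0 + v1 + v2) / 3.0

    # Unit normals of each triangle.
    normals = np.cross(v1 - v0, v2 - v0)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-12

    # Unit vectors from each triangle centroid toward the camera.
    to_cam = viewpoint - centroids
    dist = np.linalg.norm(to_cam, axis=1)
    to_cam /= dist[:, None] + 1e-12

    # cos(angle) between surface normal and viewing direction, clipped to [0, 1].
    cos_angle = np.clip(np.sum(normals * to_cam, axis=1), 0.0, 1.0)

    # Illustrative score: orientation term attenuated by relative distance.
    return cos_angle / (1.0 + dist / dist.mean())

def remove_less_confident(conf_a, conf_b, overlap_pairs):
    """Given index pairs (i, j) of overlapping triangles in meshes A and B,
    mark the less confident member of each pair for removal."""
    drop_a, drop_b = set(), set()
    for i, j in overlap_pairs:
        if conf_a[i] < conf_b[j]:
            drop_a.add(i)
        else:
            drop_b.add(j)
    return drop_a, drop_b
```

After the marked triangles are removed, the boundaries of the two meshes can be seamed together and the color images applied as texture, as described in the abstract.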
© 2000 Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Yiyong Sun, Christophe Dumont, and Mongi A. Abidi "Mesh-based integration of range and color images", Proc. SPIE 4051, Sensor Fusion: Architectures, Algorithms, and Applications IV, (3 April 2000); https://doi.org/10.1117/12.381624
CITATIONS
Cited by 19 scholarly publications.
KEYWORDS
3D modeling
Data modeling
3D image processing
Reconstruction algorithms
Scanners
3D metrology
Distance measurement
