Each camera shares its field of view with other cameras to handle occlusions and to enable multi-view vision. We aim to use hardware that is already installed in many modern public parking garages. The system's pipeline starts with a synchronized image-capturing process carried out separately for each camera. In the next step, moving objects are extracted by a foreground segmentation approach. Subsequently, the foreground objects from each camera are transformed into view rays in a common world coordinate system and fused to obtain plausible object hypotheses. This transformation requires a one-time initial intrinsic and extrinsic calibration. Afterwards, these view rays are filtered temporally to yield continuous object tracks. In our experiments we used a precise LIDAR-based reference system to evaluate and quantify the proposed system's precision, obtaining a mean localization accuracy of 0.24 m across different scenarios.
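The abstract does not spell out how a foreground pixel becomes a view ray, but the step can be sketched under standard assumptions: a pinhole camera model with intrinsic matrix K and world-to-camera extrinsics (R, t) from the one-time calibration, and a simple midpoint triangulation as one possible way to fuse rays from two cameras into an object hypothesis. The function names and the fusion rule below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pixel_to_view_ray(u, v, K, R, t):
    """Back-project pixel (u, v) into a view ray in world coordinates.

    Assumes a pinhole model: K is the 3x3 intrinsic matrix and (R, t) are the
    extrinsics mapping world points into the camera frame, x_cam = R @ x_world + t.
    Returns (origin, direction): the camera centre in world coordinates and a
    unit direction vector, so points on the ray are origin + s * direction.
    """
    # Pixel direction in normalized camera coordinates
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Rotate the direction into the world frame and normalize it
    d_world = R.T @ d_cam
    d_world /= np.linalg.norm(d_world)
    # Camera centre in world coordinates: C = -R^T t
    origin = -R.T @ np.asarray(t, dtype=float)
    return origin, d_world

def fuse_two_rays(o1, d1, o2, d2):
    """Fuse two view rays into a 3-D point hypothesis by taking the midpoint
    of the shortest segment between them (one simple fusion rule)."""
    w = o1 - o2
    b = d1 @ d2
    denom = 1.0 - b * b            # (near) zero if the rays are parallel
    if abs(denom) < 1e-9:
        return (o1 + o2) / 2.0     # degenerate case: fall back to the origin midpoint
    s = (b * (d2 @ w) - (d1 @ w)) / denom
    r = ((d2 @ w) - b * (d1 @ w)) / denom
    return ((o1 + s * d1) + (o2 + r * d2)) / 2.0
```

In the full system, rays from more than two cameras would be combined in this fashion and the resulting hypotheses smoothed by the temporal filter mentioned above.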
Keywords: Cameras, Image segmentation, Imaging systems, Image processing, Calibration, Image enhancement, Surveillance systems