A complete framework for automatic calibration of camera systems with an arbitrary number of image sensors is presented. This new approach is superior to other methods in that it obtains both the internal
and external parameters of camera systems with arbitrary resolutions, focal lengths, pixel sizes, positions and orientations from calibration rigs printed on paper. The only requirement on the placement of the cameras is an overlapping field of view. Although the basic algorithms are suitable for a very wide range of camera models (including OmniView and fisheye lenses), we concentrate on the
camera model by Bouguet (http://www.vision.caltech.edu/bouguetj/). The most important part of the calibration process is the search for the calibration rig, a checkerboard. Our approach is based on the topological analysis of the corner candidates. It is suitable for a wide range of sensors, including OmniView cameras, which is demonstrated by finding the rig in images of such a camera. The internal calibration of each camera is performed as proposed by Bouguet, although this may be replaced with a different model. The
calibration of all cameras into a common coordinate system is an optimization process over the spatial coordinates of the calibration rig. This approach shows significant advantages compared to the method of Bouguet, especially for cameras with a large field of view. A comparison of our automatic system with the camera calibration toolbox for MATLAB, which contains an implementation of the Bouguet calibration, shows that the automatic approach achieves higher accuracy than the manual one.
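For illustration, the sketch below shows a conventional checkerboard-based intrinsic calibration using OpenCV, whose distortion model follows the same Bouguet-style pinhole formulation. It is not the topological corner analysis or the multi-camera optimization described above, and the board size, square size, and image file names are assumed placeholders.

```python
import numpy as np
import cv2

# Illustrative only: standard checkerboard-based intrinsic calibration with
# OpenCV, whose distortion model follows the Bouguet pinhole formulation.
# Board size (7x6 inner corners), square size, and image names are assumed.
board_size = (7, 6)
square_size = 0.025  # edge length of one checkerboard square in metres (assumed)

# 3D coordinates of the rig corners in the rig's own coordinate system (z = 0)
rig_points = np.zeros((board_size[0] * board_size[1], 3), np.float32)
rig_points[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)
rig_points *= square_size

object_points, image_points = [], []
for fname in ["view_00.png", "view_01.png"]:  # placeholder file names
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    found, corners = cv2.findChessboardCorners(gray, board_size)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    object_points.append(rig_points)
    image_points.append(corners)

# Intrinsics (focal lengths, principal point) and distortion coefficients
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    object_points, image_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)
```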
In this paper, we describe a real-time stereo vision algorithm that determines the disparity map of a given scene by evaluating object contours, relying on a reference image that shows the scene without objects. Contours are extracted from the full-resolution absolute difference image between the current and the reference image by binarization with several locally adaptive thresholds. To estimate disparity values, contour segments extending over several epipolar lines are used. This approach leads to very accurate disparity values. The algorithm can be configured such that no image region that differs from the reference image by more than a given minimum statistical significance is overlooked, which makes it especially suitable for safety applications.
We successfully apply this contour-based stereo vision (CBS) algorithm to the task of video surveillance of hazardous areas in a production environment, evaluating it on several thousand test images. Under the harsh conditions encountered in this setting, the CBS algorithm faithfully detects objects entering the scene and determines their three-dimensional structure. Moreover, it copes well with small objects and very difficult illumination and contrast conditions.
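As a rough illustration of the front end of such a contour-based approach (not the published CBS pipeline itself), the sketch below computes the absolute difference against an object-free reference image, binarizes it with a locally adaptive threshold, and extracts contours. The block size and offset are assumptions, and the disparity estimation from contour segments along epipolar lines is omitted.

```python
import cv2

# Illustrative front end only, not the published CBS pipeline: absolute
# difference against an object-free reference image, a locally adaptive
# binarization, and contour extraction. File names, block size, and offset
# are assumed placeholders.
reference = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)  # scene without objects
current = cv2.imread("current.png", cv2.IMREAD_GRAYSCALE)      # scene with objects

diff = cv2.absdiff(current, reference)

# Locally adaptive binarization: each pixel is compared with the mean of its
# 31x31 neighbourhood plus a small offset (both values are assumptions).
mask = cv2.adaptiveThreshold(diff, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                             cv2.THRESH_BINARY, 31, -5)

# Object contours in the binarized difference image
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
print("found", len(contours), "contour candidates")
```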
This paper presents three novel matching algorithms in which a hypothesis of a 3D object is matched to a 2D image. The three algorithms are compared with respect to speed and precision on several examples.
A hypothesis consists of the object model and its six degrees of freedom. The hypothesis is projected into the image plane using a pinhole camera model. The object is represented by a feature-attributed 3D geometric model that contains various local features and their visibility rules. After projection into the image plane, the local neighborhood of each projected feature is searched for its best match. There is a trade-off between the rigidity of the object and the best-match positions of the local features in the image. After matching, a 2D-3D pose estimation is run to obtain an updated pose.
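A minimal sketch of this project-match-reestimate loop, assuming placeholder intrinsics, model feature points, and an initial pose, might look as follows; the local feature search is stubbed out, and the 2D-3D pose update uses OpenCV's solvePnP rather than the authors' method.

```python
import numpy as np
import cv2

# Minimal sketch of the hypothesis-projection / pose-update loop, not the
# authors' implementation. Intrinsics, feature points, and pose are assumed.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])              # pinhole intrinsics (assumed)
model_features = np.array([[0.0, 0.0, 0.0],
                           [0.1, 0.0, 0.0],
                           [0.0, 0.1, 0.0],
                           [0.0, 0.0, 0.1]])  # 3D feature positions on the object

rvec = np.zeros((3, 1))                       # hypothesis orientation (Rodrigues vector)
tvec = np.array([[0.0], [0.0], [1.0]])        # hypothesis translation

# Project the hypothesis into the image plane with a pinhole camera model
projected, _ = cv2.projectPoints(model_features, rvec, tvec, K, None)
projected = projected.reshape(-1, 2)

# Placeholder for the local feature search: in the real system, the image
# neighbourhood of each projected feature is searched for its best match.
matched_2d = projected + np.random.normal(scale=1.0, size=projected.shape)

# 2D-3D pose estimation yields the updated pose from the matched positions
ok, rvec_new, tvec_new = cv2.solvePnP(model_features, matched_2d, K, None,
                                      rvec, tvec, useExtrinsicGuess=True)
```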
Three novel algorithms for matching the local features while taking their geometric configuration into account are described in this paper. The first algorithm combines the local features into a graph. The graph is viewed as a network of springs, where the spring forces constrain the object's rigidity. The quality of the local best matches is represented by additional forces introduced at the nodes of the graph. The second matching algorithm decouples the local features from each other so that they can be moved independently; this imposes no constraints on the rigidity of the object and does not consider the feature quality. The third matching method takes the feature quality into account by using it within the pose estimation.
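The following toy relaxation illustrates only the spring-network idea behind the first algorithm; the graph, rest geometry, match qualities, and gains are invented placeholders, and the projection and pose-update steps are not shown.

```python
import numpy as np

# Toy relaxation of the spring-network idea: each node (a projected feature)
# is pulled toward its local best-match position, weighted by match quality,
# while springs along graph edges penalize deviations from the projected
# geometry and thereby constrain the object's rigidity. All positions, edges,
# weights, and gains below are assumed placeholders.
projected = np.array([[100., 100.], [140., 100.], [120., 140.]])  # projected features
best_match = np.array([[104., 102.], [143., 99.], [118., 145.]])  # local best matches
quality = np.array([0.9, 0.5, 0.8])                               # match quality per node
edges = [(0, 1), (1, 2), (0, 2)]                                  # graph connectivity

positions = projected.copy()
rest = {e: projected[e[1]] - projected[e[0]] for e in edges}  # rest vectors of the springs
k_spring, k_match, step = 1.0, 2.0, 0.1

for _ in range(200):
    # Force pulling each node toward its best match, scaled by match quality
    forces = k_match * quality[:, None] * (best_match - positions)
    for i, j in edges:
        # Spring force proportional to the deformation of each edge
        stretch = (positions[j] - positions[i]) - rest[(i, j)]
        forces[i] += k_spring * stretch
        forces[j] -= k_spring * stretch
    positions += step * forces

print(positions)  # relaxed feature positions balancing rigidity and match quality
```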