Traditional three-dimensional (3D) calibration targets consist of two or three mutually orthogonal planes (each plane contains several control points formed by corners or circular marks) that cannot be captured simultaneously by cameras in front view. As a result, the images of such calibration targets suffer from large perspective distortions, which makes image-coordinate detection of the control points inaccurate. In addition, recognition of the control points usually needs manual intervention to eliminate mismatches, which consumes a large amount of time. A new 3D calibration target is presented for automatic and accurate camera calibration. The target employs two parallel planes instead of orthogonal planes to reduce perspective distortion, so that both planes can be captured simultaneously by cameras in front view. The control points of the target are carefully designed circular coded markers that can be recognized automatically without manual intervention. Owing to perspective projection, the projections of the circular coded markers' centers deviate from the centers of their corresponding image ellipses. Collinearity of the control points is used to correct the perspective distortion of the image ellipses. Experimental results show that the calibration target is recognized automatically and correctly under large illumination and viewpoint changes. The image extraction errors of the control points are under 0.1 pixels. When applied to binocular camera calibration, the mean reprojection errors are less than 0.15 pixels, and the 3D measurement errors are less than 0.2 mm along the x and y axes and 0.5 mm along the z axis.
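As a minimal sketch (not the authors' implementation) of how candidate control points might be extracted as ellipse centers before the collinearity-based correction, the following assumes an OpenCV pipeline; the image path and the filtering thresholds are illustrative assumptions.

```python
# Sketch: extract candidate control points as fitted-ellipse centers (assumed OpenCV >= 4).
# The fitted ellipse center only approximates the projected circle center; the paper
# corrects this bias afterwards using the collinearity of the control points.
import cv2
import numpy as np

img = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)   # hypothetical image path
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

centers = []
for c in contours:
    if len(c) < 20:                          # need enough points for a stable ellipse fit
        continue
    (cx, cy), (ax1, ax2), _ = cv2.fitEllipse(c)
    major, minor = max(ax1, ax2), min(ax1, ax2)
    if minor > 5 and major / minor < 3:      # reject tiny blobs and extreme ellipses
        centers.append((cx, cy))

centers = np.array(centers)
print(f"{len(centers)} candidate control points")
```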
The Scale Invariant Feature Transform (SIFT) has been shown to outperform other features in distinctiveness and robustness. However, it does not handle the matching of low-contrast images well, and its matching results are sensitive to large 3D viewpoint changes of the camera. To improve the performance of SIFT on low-contrast images and on images with large 3D viewpoint change, a new matching method based on an improved SIFT is proposed. First, an adaptive contrast threshold is computed for each initial keypoint in a low-contrast image region from the pixels in its 9×9 local neighborhood, and this threshold is then used to eliminate unstable initial keypoints in low-contrast regions. Second, a new SIFT descriptor with 48 dimensions is computed for each keypoint. Third, a hierarchical matching method based on the epipolar line and on differences of the keypoints' dominant orientations is presented. The experimental results show that the method greatly improves the performance of SIFT for low-contrast image matching. Moreover, when applied to stereo image matching together with the hierarchical matching method, both the number of correct matches and the matching efficiency are greatly improved.
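The following is a hedged sketch of an adaptive contrast threshold derived from a 9×9 neighborhood, as described above; the base threshold value and the local-contrast measure are assumptions for illustration, not the authors' exact formulation.

```python
# Sketch: per-keypoint adaptive contrast threshold from a 9x9 local neighborhood.
# The base threshold 0.04 (the common SIFT default) and the max-min contrast
# measure are illustrative choices, not taken from the paper.
import numpy as np

def adaptive_contrast_threshold(gray, x, y, base_thresh=0.04, radius=4):
    """Scale the base contrast threshold by local contrast in a 9x9 window around (x, y)."""
    h, w = gray.shape
    x0, x1 = max(x - radius, 0), min(x + radius + 1, w)
    y0, y1 = max(y - radius, 0), min(y + radius + 1, h)
    patch = gray[y0:y1, x0:x1].astype(np.float32) / 255.0
    local_contrast = patch.max() - patch.min()      # simple local contrast measure
    return base_thresh * local_contrast             # lower threshold in low-contrast regions

# A keypoint is kept if its DoG response magnitude exceeds this adaptive threshold,
# preserving keypoints in low-contrast regions that a fixed global threshold would discard.
```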
At least three stellar images taken from different points in space with different camera orientations are needed, and calibration is realized by finding corresponding stars in the stellar images. The method does not need any knowledge of the camera orientations; the calibration is based only on the stellar image correspondences. In this method, the homography between stellar images induced by the stars (called the star-homo for short) is used to approximate the infinite homography (called the inf-homo for short). It is well known that the inf-homo provides constraints on the image of the absolute conic (IAC), which is related to the camera internal parameters. Therefore, we use the star-homo in place of the inf-homo to compute the IAC. Once the IAC is computed, the camera internal parameters can be decomposed from it. When computing the IAC, an unknown scale factor exists, which makes the constraints on the IAC nonlinear. To transform the nonlinear equations into linear ones, we precompute the scale factor from an initial estimate of the principal point. The advantage of the linear equations is that they are easier to solve and the results are more accurate and robust. The experimental results show that the proposed method is feasible and can calibrate a space camera with high precision. Under a star-point extraction error of 1 pixel, the relative errors of the camera internal parameters are below 0.7%.
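A sketch of the linear IAC recovery implied by the constraint above is shown below, assuming the scale factor for each homography has already been fixed from the initial principal-point estimate, as the abstract describes. Function names and the least-squares setup are illustrative, not the authors' code.

```python
# Sketch: solve H^T w H = lam * w linearly for the symmetric IAC w, then decompose K.
# Each H is a star-homo approximating the infinite homography; lam is precomputed.
import numpy as np

def iac_from_homographies(homographies, lams):
    """Stack the linear constraints from each (H, lam) pair and solve for the 6 IAC unknowns."""
    # map symmetric-matrix entries to the 6-vector [w11, w12, w13, w22, w23, w33]
    idx = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (0, 2): 2, (2, 0): 2,
           (1, 1): 3, (1, 2): 4, (2, 1): 4, (2, 2): 5}
    rows = []
    for H, lam in zip(homographies, lams):
        for i in range(3):
            for j in range(i, 3):                   # upper triangle: 6 equations per H
                row = np.zeros(6)
                for a in range(3):
                    for b in range(3):
                        row[idx[(a, b)]] += H[a, i] * H[b, j]
                row[idx[(i, j)]] -= lam
                rows.append(row)
    _, _, vt = np.linalg.svd(np.array(rows))
    w = vt[-1]                                      # null-space vector = IAC up to scale
    W = np.array([[w[0], w[1], w[2]],
                  [w[1], w[3], w[4]],
                  [w[2], w[4], w[5]]])
    return W / W[2, 2]

def intrinsics_from_iac(W):
    """Decompose K from the IAC using W = K^{-T} K^{-1}."""
    if W[0, 0] < 0:                                 # null-space solution is defined up to sign
        W = -W
    L = np.linalg.cholesky(W)                       # W = L L^T with L = K^{-T}
    K = np.linalg.inv(L).T                          # K is upper triangular
    return K / K[2, 2]
```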
To improve the robustness and real-time performance of SURF-based image matching algorithms, a method of constructing the SURF descriptor based on sector-area partitioning of a circular region is proposed, reducing the descriptor dimension from 64 to 32. The new descriptor is computed in a circular local region with a radius of 10s. First, the local region is divided into 8 equal sector areas starting from the dominant orientation in counterclockwise order. Second, the dominant orientation and its orthogonal direction are defined as the x and y axes of the keypoint's local frame. Third, the Haar wavelet responses in the x and y directions are computed within the keypoint's local region. To reduce boundary effects and outside noise, the Haar wavelet responses of samples shared by adjacent sectors are assigned to both sectors with different weights, and a Gaussian weighting function is then applied. The histograms of the Haar wavelet responses and of their absolute values are computed, so each sector sub-region yields a 4-dimensional vector. Finally, a 32-dimensional descriptor is formed and normalized to achieve illumination invariance. The experimental results indicate that the average matching speed of the new method increases by about 31.18%.
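Below is a minimal sketch of the 8-sector, 32-dimensional descriptor layout described above. It accumulates the per-sector statistics (Σdx, Σdy, Σ|dx|, Σ|dy|) and normalizes the result; the boundary weighting and Gaussian weighting are omitted, and the function signature is an assumption for illustration.

```python
# Sketch: 8 sectors x 4 Haar-response statistics = 32-dimensional descriptor.
import numpy as np

def sector_descriptor(points, responses, center, dominant_angle, n_sectors=8):
    """points: (N,2) sample coords; responses: (N,2) Haar (dx, dy) already expressed
    in the keypoint's local frame; center: keypoint (x, y)."""
    desc = np.zeros((n_sectors, 4))
    rel = points - center
    angles = (np.arctan2(rel[:, 1], rel[:, 0]) - dominant_angle) % (2 * np.pi)
    sectors = (angles / (2 * np.pi / n_sectors)).astype(int) % n_sectors
    for s, (dx, dy) in zip(sectors, responses):
        desc[s] += [dx, dy, abs(dx), abs(dy)]
    desc = desc.ravel()
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc    # normalization gives illumination invariance
```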
To increase the speed and matching ability of the SIFT algorithm, the SIFT descriptor and matching strategy are improved. First, a method of constructing the feature descriptor based on sector areas is proposed. By computing the gradient histograms of location bins partitioned into 6 sector areas, a descriptor with 48 dimensions is constructed, which reduces the dimension of the feature vector and the complexity of building the descriptor. Second, a strategy is introduced that partitions the circular region into 6 identical sector areas starting from the dominant orientation. Consequently, the computational complexity is reduced because no rotation of the region is required. The experimental results indicate that, compared with the OpenCV SIFT implementation, the average matching speed of the new method increases by about 55.86%. The matching accuracy is increased even under some variation of viewpoint, illumination, rotation, scale, and defocus. The new method achieves satisfactory results in gun-bore flaw image matching. Keywords: Metrology, Flaw image matching, Gun bore, Feature descriptor
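A sketch of the 6-sector, 48-dimensional gradient descriptor (6 sectors × 8 orientation bins) follows; it illustrates why no explicit patch rotation is needed, since both the sector index and the orientation bin are measured relative to the dominant orientation. The bin counts follow the abstract; the interpolation and weighting details are assumptions.

```python
# Sketch: 6 sectors x 8 gradient-orientation bins = 48-dimensional descriptor,
# with all angles taken relative to the dominant orientation (no patch rotation).
import numpy as np

def sector_gradient_descriptor(rel_pos, grad_mag, grad_ang, dominant_angle,
                               n_sectors=6, n_bins=8):
    """rel_pos: (N,2) sample offsets from the keypoint; grad_mag/grad_ang: (N,) gradients."""
    desc = np.zeros((n_sectors, n_bins))
    sector_ang = (np.arctan2(rel_pos[:, 1], rel_pos[:, 0]) - dominant_angle) % (2 * np.pi)
    sectors = (sector_ang / (2 * np.pi / n_sectors)).astype(int) % n_sectors
    rel_grad = (grad_ang - dominant_angle) % (2 * np.pi)
    bins = (rel_grad / (2 * np.pi / n_bins)).astype(int) % n_bins
    for s, b, m in zip(sectors, bins, grad_mag):
        desc[s, b] += m                          # magnitude-weighted orientation histogram
    desc = desc.ravel()
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc
```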
To improve the matching accuracy and the level of automation of image mosaicking, a matching algorithm based on SIFT (Scale Invariant Feature Transform) features is proposed, as detailed below. First, according to the result of a coarse comparison against a given basic matching threshold, a collection of corresponding SIFT features, which still contains mismatches, is obtained. Second, after calculating, for every pair of corresponding features, the ratio of the Euclidean distance to the closest neighbor to the distance to the second-closest neighbor, we select the image coordinates of the corresponding SIFT features with the eight smallest ratios to solve the initial parameters of the pinhole camera model, and then calculate the maximum error σ between the transformed coordinates and the original image coordinates of these eight correspondences. Third, the ratio of the largest original image coordinate of the eight correspondences to the whole image size is calculated and used as the control parameter k of the matching-error threshold. Finally, the difference between the transformed coordinates and the original image coordinates is computed for all features in the collection, and correspondences whose difference is larger than 3kσ are deleted. We thereby obtain an exact collection of matching features with which to solve the parameters of the pinhole camera model. Experimental results indicate that the proposed method is stable and reliable when the images exhibit some variation of viewpoint, illumination, rotation, and scale. The new method achieves excellent matching accuracy on the experimental images. Moreover, it can select the matching threshold for different images automatically without any manual intervention.
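The steps above can be summarized in the following sketch, where a planar homography stands in for the pinhole-model parameters (an assumption made only for this illustration); the function name and argument layout are likewise illustrative.

```python
# Sketch: coarse-to-fine match filtering with the 3*k*sigma error threshold.
import numpy as np
import cv2

def filter_matches(src_pts, dst_pts, ratios, image_size):
    """src_pts/dst_pts: (N,2) corresponding coordinates; ratios: (N,) nearest/second-nearest
    distance ratios; image_size: (width, height) of the source image."""
    order = np.argsort(ratios)
    seed = order[:8]                                   # eight most reliable correspondences
    H, _ = cv2.findHomography(src_pts[seed], dst_pts[seed], 0)

    # maximum transfer error sigma over the eight seed correspondences
    proj = cv2.perspectiveTransform(src_pts[seed].reshape(-1, 1, 2).astype(np.float32), H)
    sigma = np.max(np.linalg.norm(proj.reshape(-1, 2) - dst_pts[seed], axis=1))

    # control parameter k: largest seed coordinate relative to the image size
    k = np.max(src_pts[seed]) / max(image_size)

    # keep only correspondences whose transfer error is within 3*k*sigma
    proj_all = cv2.perspectiveTransform(src_pts.reshape(-1, 1, 2).astype(np.float32), H)
    err = np.linalg.norm(proj_all.reshape(-1, 2) - dst_pts, axis=1)
    keep = err <= 3 * k * sigma
    return src_pts[keep], dst_pts[keep]
```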
To enable an image data acquisition and storage system for aerial photographic survey to sustain a continuous storage speed of 144 MB/s with a data storage capacity of 260 GB, three main problems are solved in this paper. First, with multi-channel synchronous DMA transfer, parallel data storage on four SCSI hard disks is realized, which solves the problem of the data transfer rate being too high for direct storage. Second, to increase the data transfer rate, a high-speed bus based on LVDS and a SCSI control circuit based on the FAS368M were designed, which removes the limitation that the PCI bus places on the storage speed. Finally, the decline in the continuous storage speed of the SCSI hard disks caused by the long interval between two DMA transfers is solved by optimizing the DMA channel. Tests of the practical system show that the acquisition and storage system achieves a continuous storage speed of 150 MB/s and a data storage capacity of 280 GB. It therefore provides a new storage method for high-speed, high-volume image data.
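As a back-of-the-envelope check of the bandwidth budget above (using only the figures given in the text), spreading the required 144 MB/s over four parallel SCSI disks leaves 36 MB/s per disk, and 260 GB fills in roughly half an hour at that rate.

```python
# Back-of-the-envelope arithmetic for the requirements stated above.
required_rate_mb_s = 144          # required continuous storage speed, MB/s
disks = 4                         # parallel SCSI hard disks
capacity_gb = 260                 # required capacity, GB

per_disk_mb_s = required_rate_mb_s / disks
record_minutes = capacity_gb * 1024 / required_rate_mb_s / 60

print(f"per-disk sustained rate: {per_disk_mb_s:.0f} MB/s")      # 36 MB/s
print(f"recording time at 144 MB/s: {record_minutes:.1f} min")   # about 30.8 min
```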