We propose a novel B-spline active contour model based on image fusion. Compared with conventional active contours, this model has two advantages. First, it is represented by a cubic B-spline curve, which can adaptively determine the step length of the curve parameter and can effectively detect and represent the corner points of the object contour. Second, it is implemented in combination with image fusion: its external image force is modified to be the weighted sum of the image forces from two modalities, with the two weights computed from either the image entropy or the standard deviation of the image contrast in a local region. Experiments indicate that this active contour accurately detects both the contour edges and the corner points of the object. Our experiments also show that, by suppressing the influence of texture and pattern, convergence with the image force weighted by the standard deviation of the image contrast is more accurate than convergence with the force weighted by image entropy.
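The weighting idea above can be sketched as follows. The abstract does not give the exact normalization, so the helper names, the sliding-window representation, and the sum-to-one normalization of the two confidences are illustrative assumptions, not the paper's formulation.

```python
import math

def local_entropy(window):
    """Shannon entropy (bits) of the gray levels in a local window."""
    hist = {}
    for g in window:
        hist[g] = hist.get(g, 0) + 1
    n = float(len(window))
    return -sum((c / n) * math.log(c / n, 2) for c in hist.values())

def local_contrast_std(window):
    """Standard deviation of gray levels in a local window, used as a contrast measure."""
    n = float(len(window))
    mean = sum(window) / n
    return math.sqrt(sum((g - mean) ** 2 for g in window) / n)

def fused_force(force_a, force_b, conf_a, conf_b):
    """Weighted sum of two modal image forces; the weights are the two local
    confidence measures (entropy or contrast std) normalized to sum to one."""
    total = conf_a + conf_b
    if total == 0:
        return 0.5 * (force_a + force_b)  # no information in either window
    return (conf_a / total) * force_a + (conf_b / total) * force_b
```

A window with strong contrast thus dominates the fused force, which is the behavior the abstract credits for suppressing texture influence.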
A novel scheme for measuring an object's surface area is proposed, suitable for computing the area of a planar object with smooth, irregular edges. The scheme consists of four steps. First, a photoelectric image collimation system is devised to obtain a target image of the detected object. Second, a multiscale active contour is applied to the defocused-to-focused image sequence so that it converges gradually, from coarse to fine scales, onto the target's contour edge. Third, for the convergent active contour, two formulas for the area and centroid of a closed B-spline curve are applied to compute the image target's area and centroid exactly. Finally, a novel centroid self-calibration technique is applied, which measures the pixel-size equivalence at the computed centroid and uses a dual-frequency laser to measure the object's true surface area exactly. An experiment on measuring the area of a circular aperture indicates that the scheme's repetition error decreases to 0.11% when the B-spline active contour has 25 control points.
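The area and centroid computation in the third step follows from Green's theorem. The paper's closed-form B-spline expressions are not reproduced in the abstract; a minimal sketch is the shoelace formula applied to a densely sampled closed curve, which those expressions reduce to as the sampling is refined.

```python
def polygon_area_centroid(pts):
    """Signed area and centroid of a closed curve given as sample points
    (shoelace formula, i.e. Green's theorem on the boundary polygon).

    pts: list of (x, y); the curve closes implicitly from the last point
    back to the first. Counterclockwise orientation gives positive area.
    """
    a = cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0  # twice the signed area of triangle (O, p_i, p_i+1)
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return a, (cx / (6.0 * a), cy / (6.0 * a))
```

For a B-spline active contour, the sample points would be taken by evaluating the converged curve at a fine parameter step.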
A detection and tracking scheme for a dynamic contour based on image fusion is proposed. In the scheme, an image fusion based on the wavelet transform is applied, and a novel multiresolution dynamic contour is applied to the hierarchy of fused images. The scheme has three advantages. First, it retains the advantages of a monoresolution dynamic contour while relaxing the strict requirement on the initial contour. Second, relation formulas between the dynamic contour and the allowed target velocity at different resolutions are derived; from these formulas, the maximum allowed target velocity is calculated quantitatively in the monoresolution and multiresolution cases, proving that the scheme can track a fast target. Finally, the more accurate image information of the fused image is exploited, which increases the detection and tracking accuracy of the dynamic contour.
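The derived velocity relations are not stated in the abstract. A common multiresolution argument, which we assume here purely for illustration, is that wavelet level k halves the image size k times, so one pixel of inter-frame motion at level k corresponds to 2**k full-resolution pixels; the contour's capture range therefore scales accordingly.

```python
def allowed_velocity(v_max_mono, level):
    """Maximum trackable target velocity (full-resolution pixels per frame) when
    the dynamic contour starts at wavelet decomposition level `level`.

    Assumes each decomposition level halves the image size, so the monoresolution
    capture range v_max_mono scales by 2**level. This dyadic scaling is our
    illustrative assumption, not the paper's derived formula.
    """
    return v_max_mono * (2 ** level)
```

Under this assumption, starting two levels coarse quadruples the fastest target the contour can lock onto before refinement.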
A novel contour extraction scheme for detecting a moving target is proposed. The scheme consists of three steps. First, motion segmentation is applied separately to the infrared (IR) and visible image sequences to acquire an initial contour of the moving target. Second, dynamic contours are applied to make the initial contour converge to the target's contour with a Newmark-based iteration. Finally, two novel image fusions are applied to constrain the convergent dynamic contour in the visible image using the contour in the IR image. The first fusion minimizes the squared L2 norm of the difference between the B-spline control point vectors in the two modal images, without image registration. The second fusion is realized by a revised differential coupling with image registration. A contrasting experiment on image sequences of a moving vehicle indicates that the average contour extraction error decreases by 58.14% for the first fusion and by 65.12% for the second. Both fusions operate only on the control point vector of the dynamic contour and are suitable for practical application. Moreover, the Newmark-based iteration is contrasted with the Wilson-based iteration, showing that the iteration time required for convergence decreases by 21.01%.
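The first fusion's objective, the squared L2 norm of the control-point difference, can be driven down by pulling each visible-image control point toward its IR counterpart. The coupling weight `lam` below is an illustrative parameter, not a value from the paper.

```python
def coupling_cost(q_vis, q_ir):
    """Squared L2 norm of the control-point difference (the quantity minimized)."""
    return sum((xv - xi) ** 2 + (yv - yi) ** 2
               for (xv, yv), (xi, yi) in zip(q_vis, q_ir))

def couple_control_points(q_vis, q_ir, lam=0.5):
    """One coupling step: move each control point of the visible-image contour
    toward the corresponding IR control point. lam in [0, 1] is an illustrative
    coupling weight; lam = 0.5 averages the two contours."""
    return [((1 - lam) * xv + lam * xi, (1 - lam) * yv + lam * yi)
            for (xv, yv), (xi, yi) in zip(q_vis, q_ir)]
```

Because the step acts on control point vectors only, no per-pixel registration between the IR and visible images is required, which matches the abstract's claim for the first fusion.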
IR and visible images are commonly used in the detection and tracking of moving targets. A novel contour extraction scheme for a moving target is proposed in this paper. First, a motion segmentation technique is applied to obtain the initial contour. Second, a dynamic contour is used to represent the initial contour and converges to the target's contour. Last, a novel feature-level fusion is proposed, which minimizes the squared norm of the difference between the control point vectors in the two modal images; image registration is not needed. An experiment on a moving vehicle indicates that, for the visible image, the average contour extraction error decreases by 58.14% after the fusion. Meanwhile, a fast iteration algorithm for the dynamic contour based on the Newmark method is devised and contrasted with the Wilson method; the contrasting experiment indicates that its computational complexity decreases by 21.01%. The fusion is implemented only with control point vectors and is suitable for real-time processing.
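The Newmark method referenced above is a standard implicit time integrator for the second-order contour dynamics M q̈ + C q̇ + K q = F. As a minimal sketch we show one Newmark-beta step for a single scalar degree of freedom (in practice it is applied per control point coordinate); beta = 0.25, gamma = 0.5 is the unconditionally stable average-acceleration variant, chosen here as an assumption since the abstract does not give the parameters.

```python
def newmark_step(m, c, k, f, x, v, a, dt, beta=0.25, gamma=0.5):
    """One Newmark-beta time step for m*x'' + c*x' + k*x = f (scalar DOF).
    Returns (x_new, v_new, a_new)."""
    # Effective stiffness and effective load of the standard Newmark formulation.
    k_eff = k + gamma / (beta * dt) * c + m / (beta * dt * dt)
    f_eff = (f
             + m * (x / (beta * dt * dt) + v / (beta * dt)
                    + (1.0 / (2 * beta) - 1) * a)
             + c * (gamma / (beta * dt) * x + (gamma / beta - 1) * v
                    + dt * (gamma / (2 * beta) - 1) * a))
    x_new = f_eff / k_eff
    # Recover acceleration and velocity from the Newmark kinematic relations.
    a_new = ((x_new - x) / (beta * dt * dt) - v / (beta * dt)
             - (1.0 / (2 * beta) - 1) * a)
    v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
    return x_new, v_new, a_new
```

Each step solves a linear system (here a scalar division) whose effective stiffness is constant, which is what makes the iteration cheap enough for real-time contour evolution.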
IR and visible sensors are commonly used in target tracking and recognition systems, and fusing their images can effectively improve a system's tracking and detection accuracy. However, the sampling rates of these sensors usually differ, so a new feature-level image fusion scheme is devised in this paper. The scheme is universal and can be widely used for detecting and tracking a moving target when the sensors' sampling rates differ greatly (e.g., radar and visible images). The fusion scheme is divided into two parts, asynchronous and synchronous fusion, and the target's contour is represented by a dynamic contour. In the asynchronous fusion, a multiple-sequence image fusion method based on a statistical filtering model is devised for the sensor with the high sampling rate, yielding a measurement estimate of the target's contour. In the synchronous fusion, a real-time differential coupling is then applied between that estimate and the image from the sensor with the low sampling rate, in order to effectively constrain the convergent shape of the dynamic contour in the visible image. A contrasting simulation experiment proves the fusion scheme's efficacy: the average tracking error in the visible image with fusion decreases by 68.31%.
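The statistical filtering model behind the asynchronous fusion is not specified in the abstract. A minimal sketch, under the assumption of a constant-state model with Gaussian noise, is a scalar Kalman filter that fuses the high-rate sensor's sequence of contour measurements into a single estimate with its variance.

```python
def fuse_measurements(measurements, r, q=0.0, x0=0.0, p0=1e6):
    """Fuse a sequence of noisy measurements of one contour coordinate into a
    single estimate via a scalar Kalman filter.

    measurements: per-frame values from the high-sampling-rate sensor.
    r: measurement noise variance; q: process noise variance (0 = static state).
    The constant-state model and the diffuse prior p0 are illustrative
    assumptions, not the paper's filtering model. Returns (estimate, variance).
    """
    x, p = x0, p0
    for z in measurements:
        p += q                    # predict: state carries over, uncertainty grows by q
        k_gain = p / (p + r)      # Kalman gain
        x += k_gain * (z - x)     # update estimate toward the measurement
        p *= (1 - k_gain)         # update (shrink) the variance
    return x, p
```

The resulting per-coordinate estimate would then feed the synchronous differential coupling against the low-rate sensor's next frame.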