Open Access
Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis
14 January 2016
Zhuozheng Wang, J. R. Deller Jr., Blair D. Fleet
Abstract
Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.

1. Introduction

Multisensor image fusion is the process of combining two or more images of a scene to create a single image that is more informative than any of the input images.1 Image-fusion technology is employed in numerous applications including visual interpretation, image drawing, geographical information gathering, and military target reconnaissance and surveillance. In particular, research into techniques for image fusion by contrast reversal in local image regions has important theoretical and practical significance.1

Image-fusion methods are classified as spatial- or transform-domain techniques. Spatial-domain methods are simple, but generally result in images with insufficient detail. Transform-domain strategies based on image-fusion arithmetic and wavelet transformations (WTs) represent the current state of the art. Wavelets can be used to resolve an original image into a series of subimages with different spatial resolutions and frequency-domain characteristics. This representation fully reflects local variations in the original image. In addition, WTs provide multiresolution analysis,2,3 perfect reconstruction, and orthogonality.4 Image-fusion arithmetic based on WT coefficients can flexibly resolve multidimensional low-frequency and high-frequency image components. Wavelet transforms can also realize multisensor image fusion using rules that emphasize critical features of the scene.5,6

Traditional convolution-based WT methods for multiresolution analysis have been widely applied to image fusion for images with a large number of pixels, but the memory and the computational requirements for these techniques, and their Fourier-domain equivalents, can be substantial. Attempts to create more efficient algorithms in the transform domain have employed the lifting wavelet transform (LWT).7–9 Also known as the second-generation WT,10 the LWT is not dependent upon the Fourier transform. Rather, all operations are carried out in the spatial domain. Image reconstruction is achieved by simply adjusting the calculation and sign orders in the decomposition process,11 thereby reducing two-dimensional image data computation by half, and the data storage to about 75%.
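To make the lifting idea concrete, the following is a minimal one-dimensional sketch assuming the LeGall (spline) 5/3 lifting steps in floating-point form; a two-dimensional transform would apply such steps separably to rows and columns. The function names and boundary handling are illustrative, not the implementation used in this paper.

```python
import numpy as np

def lwt53_forward(x):
    """One level of a floating-point 5/3 lifting wavelet transform of a 1-D signal.
    Assumes len(x) is even; edges are handled by repeating the boundary sample."""
    x = np.asarray(x, dtype=float)
    s, d = x[0::2], x[1::2]                      # split into even (approx) and odd (detail)
    s_right = np.append(s[1:], s[-1])            # s[n+1] with repeated-edge extension
    d = d - 0.5 * (s + s_right)                  # predict step: remove what the evens predict
    d_left = np.insert(d[:-1], 0, d[0])          # d[n-1] with repeated-edge extension
    s = s + 0.25 * (d_left + d)                  # update step: preserve the running average
    return s, d                                  # low-frequency and high-frequency coefficients

def lwt53_inverse(s, d):
    """Invert by running the lifting steps in reverse order with flipped signs."""
    d_left = np.insert(d[:-1], 0, d[0])
    s = s - 0.25 * (d_left + d)
    s_right = np.append(s[1:], s[-1])
    d = d + 0.5 * (s + s_right)
    x = np.empty(s.size + d.size)
    x[0::2], x[1::2] = s, d
    return x

# perfect-reconstruction check on a random signal
x = np.random.rand(16)
assert np.allclose(lwt53_inverse(*lwt53_forward(x)), x)
```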

One important motivation for the use of WTs in image processing is their ability to segregate low-frequency content that is critical for interpretation. Traditional image-fusion methods are based on selecting these significant wavelet decomposition coefficients.12–14 Even with the effective separation and processing of low-frequency components afforded by WT decomposition, such an approach fails to take into full account the relationships among multiple input images. The result can be adverse fusion effects. Significant information can be lost when the local area variance corresponding to pixels across images is small.8,9

Other algorithms use principal component analysis (PCA) to estimate the wavelet coefficients. This method works well in low-noise environments, but PCA breaks down when corruption is severe, even if only very few of the observations are affected.15 For example, consider the two PCA simulation results shown in Fig. 1. Suppose that the light line in Fig. 1(a) represents an object in an image, and that the “×” markers represent samples of that object that have been corrupted by low-level Gaussian noise. The reconstruction of the object from the samples using the classical PCA approach is shown as a heavy line. The results of a similar experiment are shown in Fig. 1(b) where the PCA reconstruction is seriously in error as the result of a single noise outlier in the sampling process.

Fig. 1

PCA reconstruction fails when data are corrupted by large errors: (a) samples corrupted by low-level noise and (b) samples including one noise outlier.


To remedy shortcomings in the current methods, this paper presents an improved image-fusion algorithm based on the LWT. For low-frequency image components represented in the LWT decomposition, scale coefficients are determined through matrix completion16 instead of PCA. For the high-frequency detail and edge information, the LWT coefficients are chosen through self-adaptive regional variance estimation.

2. Matrix Completion and Robust Principal Component Analysis

2.1. Overview

The matrix completion problem has been the subject of intense research in recent years. Candès et al.17 verify that the $\ell_0$-norm optimization problem is equivalent to $\ell_1$-norm optimization under a restricted isometry property. Candès and Recht16 demonstrate exact matrix completion using convex optimization. The "nuclear norm" of the matrix $X \in \mathbb{R}^{N \times N}$,

Eq. (1)

\|X\|_* = \sum_k \sigma_k(X),
in which $\sigma_k(X)$ denotes the $k$'th largest singular value, can be used to approximate the matrix rank, $\rho(X)$. The method yields a convex minimization problem for which there are numerous efficient solutions. Candès and Recht16 prove that if the number, $S$, of sampled entries obeys

Eq. (2)

S \geq C\,N^{1.2}\,\rho(X)\,\log N
for some positive constant $C$, then the $N \times N$ matrix $X$ can be perfectly recovered with high probability by solving a simple convex optimization problem.

Lin and Ma15 report a fast, scalable algorithm for solving the robust PCA (RPCA) problem. The method is based on recovering a low-rank matrix with an unknown fraction of corrupted entries. The mathematical model for estimating the low-dimensional subspace is to find a low-rank matrix. The algorithm proceeds as follows: given a matrix $A \in \mathbb{R}^{M \times N}$ with $\rho(A) \ll \min(M,N)$, the rank is the target dimension of the subspace. The observation matrix $D$ is modeled as

Eq. (3)

D = P_\Omega(A) + E,
in which PΩ(·) is a subsampling projection operator and E represents a matrix of unmodeled perturbations that is assumed sparse relative to A.

2.2. Matrix Completion

The objective of matrix completion is to recover in the low-dimensional subspace the truly low-rank matrix A from D, under the working assumption that E is zero. That is, we seek

Eq. (4)

A = \arg\min_{A \in \mathbb{R}^{N \times M}} \|A\|_*, \quad \text{subject to } P_\Omega(A) = D.
It has been shown that the solution to this convex relaxation represents an exact recovery of the matrix $A$ under quite general conditions.16 Further, the recovery is robust to noise with small magnitude bounds; that is, when the elements of $E$ are small and bounded. For example, if $E$ is a white noise matrix with standard deviation $\sigma$ and Frobenius norm $\|E\|_F < \varepsilon$, then the recovered matrix will lie in a small neighborhood of $A$ with high probability if $\varepsilon^2 \leq (M + \sqrt{8M})\,\sigma^2$.18
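As an illustration of the kind of convex solver involved, the following is a minimal NumPy sketch of a singular-value-thresholding iteration for the nuclear-norm problem in Eq. (4); SVT is one of the solvers compared later in Sec. 4.1, but this sketch, its parameter choices, and the function name are illustrative rather than the configuration used in this paper.

```python
import numpy as np

def svt_complete(D, mask, tau=None, n_iter=200):
    """Recover a low-rank matrix from the observed entries D[mask] by iterative
    singular-value thresholding; `mask` is a boolean array marking observed entries."""
    if tau is None:
        tau = 5 * np.sqrt(D.size)                  # heuristic threshold on singular values
    delta = 1.2 * D.size / mask.sum()              # heuristic step size (oversampling-based)
    Y = np.zeros_like(D, dtype=float)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        A = (U * np.maximum(s - tau, 0.0)) @ Vt    # soft-threshold the singular values
        Y += delta * mask * (D - A)                # gradient step on the observed entries only
    return A

# toy example: a rank-2 matrix with about 40% of its entries missing
rng = np.random.default_rng(0)
A_true = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 60))
mask = rng.random(A_true.shape) < 0.6
A_hat = svt_complete(A_true * mask, mask)
print(np.linalg.norm(A_hat - A_true) / np.linalg.norm(A_true))   # relative recovery error
```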

2.3. Robust Principal Component Analysis

Conventional PCA is often used to estimate a low-dimensional subspace via the following constrained optimization problem: in the observation model $D = A + E$, minimize the difference between the matrices $A$ and $D$ by solving

Eq. (5)

\min_{A,E} \|E\|_F, \quad \text{subject to } \rho(A) \leq r, \; D = A + E,
where $r \ll \min\{M,N\}$ is the target dimension of the subspace, and the use of the Frobenius norm represents an assumption that the matrix elements are corrupted by additive i.i.d. Gaussian noise. PCA works well in practice as long as the magnitude of the noise is small. To use PCA, the singular value decomposition (SVD) of $D$ is used to project the columns of $D$ onto the subspace spanned by the $r$ principal left singular vectors of $D$.
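For reference, a minimal sketch of the truncated-SVD projection just described (no mean-centering is applied, matching the formulation of Eq. (5)); the function name is illustrative:

```python
import numpy as np

def pca_project(D, r):
    """Classical PCA as described above: project the columns of D onto the
    subspace spanned by the r principal left singular vectors of D."""
    U, _, _ = np.linalg.svd(D, full_matrices=False)
    Ur = U[:, :r]                 # r principal left singular vectors
    return Ur @ (Ur.T @ D)        # rank-r approximation A of the observation D
```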

RPCA employs an identity sampling operator $P_\Omega(\cdot)$ (all entries are observed) and a sparse error matrix $E$, which differ from their counterparts in the matrix completion and PCA approaches. Wright et al.19 and Candès et al.20 have shown that, for a sufficiently sparse error matrix, a low-rank matrix $A$ can be recovered exactly from the observation matrix $D$ by solving the following convex optimization problem:

Eq. (6)

A = \arg\min_{A} \{\|A\|_* + \lambda\|E\|_1\}, \quad \text{subject to } D = A + E,
where λ is a positive weighting parameter. RPCA has been used for background modeling, removing shadows from face images, alignment of the human face, and video denoising.21,22

In the present paper, RPCA is coupled with the “inexact augmented Lagrange multiplier” (IALM)15 method to determine the low-frequency LWT coefficients for fusion of corrupted images. The IALM method is described in Sec. 3.2 after introducing the general procedure.

3. Frequency-Domain Fusion Rules

3.1. Overview

By adopting separate fusion strategies for high- and low-frequency components, the WT can differentially preserve the critical features that accompany these separate bands. The procedure that exploits this property is shown in Fig. 2. The source images are converted to frequency-domain coefficients by the LWT. Frequency-band-dependent fusion rules are applied to the low- and high-frequency components of each image. The inverse lifting wavelet transform (ILWT) is used to reconstruct the fused image.

Fig. 2

Image fusion processing based on wavelet transform.

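A skeleton of the Fig. 2 pipeline might look as follows. This is a minimal sketch only: PyWavelets' convolution-based biorthogonal transform ("bior2.2") stands in for the LWT with a spline 5/3-type basis, and the per-band rules below are simple placeholders for the rules actually developed in Secs. 3.2 and 3.3; the function name, wavelet choice, and decomposition level are illustrative.

```python
import numpy as np
import pywt  # PyWavelets; used here as a stand-in for the LWT/ILWT of Fig. 2

def fuse_pair(img1, img2, wavelet="bior2.2", level=2):
    """Decompose both source images, fuse each frequency band, and reconstruct.
    Placeholder rules: average the approximation band, keep the larger-magnitude
    detail coefficient. The paper's rules are IALM (Sec. 3.2) and regional
    variance estimation (Sec. 3.3)."""
    c1 = pywt.wavedec2(np.asarray(img1, float), wavelet, level=level)
    c2 = pywt.wavedec2(np.asarray(img2, float), wavelet, level=level)
    fused = [0.5 * (c1[0] + c2[0])]                                   # low-frequency band
    for (h1, v1, d1), (h2, v2, d2) in zip(c1[1:], c2[1:]):            # detail bands
        pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)    # max-abs placeholder
        fused.append((pick(h1, h2), pick(v1, v2), pick(d1, d2)))
    return pywt.waverec2(fused, wavelet)                              # inverse transform
```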

3.2. Low-Frequency Fusion Based on Inexact Augmented Lagrange Multiplier

Weighted average coefficients are often employed to fuse low-frequency wavelet coefficients. This method is effective when the coefficients of the fused images are similar. However, when contrast reversal occurs in local regions of an image, this procedure results in a loss of image detail in the fused image due to reduced contrast. Further, erroneous or missing regions of corrupted images strongly affect PCA results. These inadequacies of the weighted average method and PCA provide the motivation for using RPCA to determine the weighting of low-frequency coefficients.

There is ordinarily little difference in the low-frequency coefficient values extracted by the LWT from different images of the same scene. RPCA coefficients are used to represent low-frequency content in an attempt to preserve fidelity and coherency between the subbands. Algorithms have been developed in this research to solve the RPCA problem that is the basis for the recovery of the low-rank matrix A and the estimation of the sparse matrix E from the observation matrix D. We employ the IALM method to compute the low-frequency subband coefficients. The method is sketched as follows.

Let $\Gamma = \{I_k \in \mathbb{R}^{N_1 \times N_2}\}_{k=1}^{K}$ denote a set of corrupted images from $K$ sensors, and let $\tilde{\Gamma} = \{\tilde{I}_k \in \mathbb{R}^{(N_1 \times N_2)/4^L}\}_{k=1}^{K}$ be the corresponding set of low-frequency subimages computed using the LWT, where $L$ is the number of LWT layers. For simplicity, we assume square images, so that each low-frequency subimage is of size $N \times N$ with $N \overset{\text{def}}{=} N_1/2^L = N_2/2^L$. Stack all $N$ columns of each $\tilde{I}_k$ into a single vector of dimension $N^2$, then use these vectors as the $K$ columns of a matrix $\tilde{I}_D$. After normalizing the data, we denote by $i_{\ell k}$ the $(\ell, k)$ element of $\tilde{I}_D$,

Eq. (7)

\tilde{I}_D = \begin{pmatrix} i_{11} & i_{12} & \cdots & i_{1K} \\ i_{21} & i_{22} & \cdots & i_{2K} \\ \vdots & \vdots & \ddots & \vdots \\ i_{N^2 1} & i_{N^2 2} & \cdots & i_{N^2 K} \end{pmatrix}.
The cumulative low-frequency subimage matrix is modeled similarly to Eq. (3),

Eq. (8)

\tilde{I}_D = \tilde{I}_A + \tilde{I}_E,
in which $\tilde{I}_A \in \mathbb{R}^{N^2 \times K}$ denotes the noise-free and integrated low-frequency subimage sequence matrix, and $\tilde{I}_E \in \mathbb{R}^{N^2 \times K}$ denotes the sparse error matrix from which high-frequency content has been attenuated by the selection of LWT coefficients. The low-frequency LWT coefficients are similar across multiple subimages of the same scene. According to the model, $\tilde{I}_A$ is noise-free and will ideally, therefore, consist of $K$ identical columns. Accordingly, $\tilde{I}_A$ will be of low rank as required by the matrix completion procedure. Thus, $\tilde{I}_A$ can be estimated via matrix completion and RPCA by solving

Eq. (9)

\min_{\tilde{I}_A,\tilde{I}_E} \|\tilde{I}_A\|_* + \lambda\|P_\Omega(\tilde{I}_E)\|_1 \quad \text{subject to } \tilde{I}_A + \tilde{I}_E = \tilde{I}_D,
where the augmented Lagrange multiplier is

Eq. (10)

L(\tilde{I}_A,\tilde{I}_E,Y,\mu) = \|\tilde{I}_A\|_* + \lambda\|P_\Omega(\tilde{I}_E)\|_1 + \mathrm{Tr}\{Y,\,\tilde{I}_D - \tilde{I}_A - \tilde{I}_E\} + \frac{\mu}{2}\|\tilde{I}_D - \tilde{I}_A - \tilde{I}_E\|_F^2.
In this equation, $\lambda$ is an estimated positive weighting parameter representing the proportion of the sparse matrix $\tilde{I}_E$ relative to the low-rank matrix $\tilde{I}_A$; its default value is $1/N$. $\mu$ is a positive tuning parameter balancing accuracy and computational effort. $\mathrm{Tr}\{A,B\}$ is the trace of the product $A^{T}B$, and $Y$ is the iterated Lagrange multiplier.

A flowchart of the IALM algorithm is shown in Fig. 3. Definitions of the notation used in the flowchart appear in Table 1. The algorithm is recursive, with superscript $j$ indicating the iteration number. The quantity $\tilde{I}_A^{(j)} \in \mathbb{R}^{N^2 \times K}$ is the recovered low-rank matrix for some sufficiently large $j$, say $j^*$. A reasonable strategy for transforming the resulting $\tilde{I}_A^{(j^*)}$ to the final low-frequency subimage is to unwrap its first column to form the original $N \times N$ image structure. The final low-frequency subimage is denoted $\tilde{I}$.

Fig. 3

Flowchart of operations in the IALM algorithm.


Table 1

Notation used in the IALM algorithm.

Notation | Definition
$\tilde{I}_D$ | Low-frequency subimage observation matrix
$\tilde{I}_E^{(j)}$ | Error (sparse) matrix, iteration $j$
$\tilde{I}_A^{(j)}$ | Recovered low-rank subimage matrix, iteration $j$
$Y^{(j)}$ | Lagrange multiplier matrix, iteration $j$
$\tau$ | Mean-squared-error tolerance bound
$\mathrm{svd}(X)$ | Singular value decomposition (SVD) of a general matrix $X$
$U$ and $V$ | Customary notation for the orthogonal matrices of the SVD
$S$ | Customary notation for the diagonal matrix of singular values
$S_\epsilon[x]$ | Soft-shrinkage operator applied to scalar $x$:15 $S_\epsilon[x] \overset{\text{def}}{=} x-\epsilon$ if $x>\epsilon$; $x+\epsilon$ if $x<-\epsilon$; $0$ otherwise ($x \in \mathbb{R}$, $\epsilon>0$)

In this process, $Y^{(0)}$ is initialized to $\tilde{I}_D/\max(\|\tilde{I}_D\|_2,\,\lambda^{-1}\|\tilde{I}_D\|_\infty)$; $\tilde{I}_E^{(0)}$ is initialized to a zero matrix of the same size as $\tilde{I}_D$; $\lambda$ is initialized to $1/\sqrt{m}$, where $m$ is the column length (number of rows) of $\tilde{I}_D$; the tolerance for the stopping criterion, $\tau$, is initialized to $1\times 10^{-7}$; and $j$ is set to zero for the loop computation.
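Under the assumption that all entries of $\tilde{I}_D$ are observed (so that $P_\Omega$ is the identity, as in Sec. 2.3), a minimal NumPy sketch of the IALM iteration and of the column stacking and unwrapping described above follows. The function names are illustrative, and the penalty parameter $\mu$ and its growth factor $\rho$ are common choices in implementations of Ref. 15 rather than values specified by this paper.

```python
import numpy as np

def shrink(X, eps):
    """Soft-shrinkage operator S_eps[.] from Table 1, applied elementwise."""
    return np.sign(X) * np.maximum(np.abs(X) - eps, 0.0)

def rpca_ialm(D, lam=None, tol=1e-7, max_iter=500):
    """Minimal IALM sketch for the RPCA problem of Eq. (9) with all entries observed.
    Returns the low-rank part A and the sparse part E of D = A + E."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(D.shape))        # ~1/N for the N^2-by-K matrix used here
    norm_2 = np.linalg.norm(D, 2)
    Y = D / max(norm_2, np.abs(D).max() / lam)   # Lagrange multiplier initialization
    E = np.zeros_like(D)
    mu, rho = 1.25 / norm_2, 1.5                 # penalty parameter and growth factor
    for _ in range(max_iter):
        U, s, Vt = np.linalg.svd(D - E + Y / mu, full_matrices=False)
        A = (U * shrink(s, 1.0 / mu)) @ Vt       # singular-value thresholding -> low rank
        E = shrink(D - A + Y / mu, lam / mu)     # entrywise shrinkage -> sparse errors
        Y = Y + mu * (D - A - E)                 # dual update
        mu *= rho
        if np.linalg.norm(D - A - E) / np.linalg.norm(D) < tol:
            break
    return A, E

def fuse_lowfreq(subimages):
    """Stack the K low-frequency N-by-N subimages column-wise as in Eq. (7),
    run IALM, and unwrap the first recovered column back to an N-by-N image."""
    N = subimages[0].shape[0]
    I_D = np.column_stack([s.reshape(-1, order="F") for s in subimages])  # N^2 x K
    I_A, _ = rpca_ialm(I_D)
    return I_A[:, 0].reshape(N, N, order="F")
```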

3.3. High-Frequency Fusion Based on Self-Adapting Regional Variance Estimation

Processing of high-frequency wavelet coefficients has a direct effect on salient details which affect the overall clarity of the image. As the variance of a subimage characterizes the degree of gray level change in a corresponding image region, the variance is a key indicator in processing of high-frequency components. In addition, there is generally a strong correlation among adjacent pixels in a local area, so that there is a significant amount of shared information among neighboring pixels. When variances in corresponding local regions across subimages vary widely, a high-frequency fusion rule for selecting the source image of greatest variance has been shown to be effective at preserving image features.8,9 However, if the local variances of two source images are similar, this method can result in the loss of information by discarding subtle variations among different subimages. An empirical procedure has been developed in which a thresholding procedure is used to segregate local areas that have sufficiently large variance. This allows the entire set to be represented by the single maximum-variance set member. The selection of this difference threshold, ξ, is discussed below.

Let us return to the original set of images $\Gamma = \{I_k \in \mathbb{R}^{N_1 \times N_2}\}_{k=1}^{K}$. Denote by $I_k(x,y)$ the gray-scale value at pixel $(x,y)$ in the $k$'th image. Also let $V_k \in \mathbb{R}^{N_1 \times N_2}$ denote a matrix associated with image $I_k$ in which matrix element $V_k(x,y)$ contains the normalized sample variance of the $3\times 3$ window of pixels centered on pixel $(x,y)$. Normalization means that all variance values lie in the interval $[0,1]$. Without loss of generality, we select images $I_1$ and $I_2$ with which to describe the steps of the high-frequency fusion algorithm:

  • 1. Compute normalized sample variance matrices V1 and V2. Then Vk(x,y) denotes the normalized variance value of pixel (x,y) in image Ik for k=1, 2.

  • 2. Apply the LWT over L=2 layers to I1, I2, V1, and V2. Multiresolution structures for each matrix are obtained: $I_1^\theta$, $I_2^\theta$, $V_1^\theta$, and $V_2^\theta$, in which the superscript θ takes one of three direction designators, horizontal (h), vertical (v), or diagonal (d), with the associated difference structure matrix

    Eq. (11)

    \Delta V^\theta(x,y) = V_1^\theta(x,y) - V_2^\theta(x,y).

    Let ΔV(x,y) denote the sum of the differences in the horizontal, vertical, and diagonal directions

    Eq. (12)

    \Delta V(x,y) = [V_1^h(x,y) - V_2^h(x,y)] + [V_1^v(x,y) - V_2^v(x,y)] + [V_1^d(x,y) - V_2^d(x,y)],
    in which Vkθ(x,y) indicates the normalized variance of the k’th image in direction θ.

  • 3. Compare the threshold value ξ with |ΔV(x,y)|. If |ΔV(x,y)| ≥ ξ, take the coefficient from the image with the larger variance as the fused wavelet coefficient; otherwise, use a weighted sum to compute the coefficient. With $D_F^\theta$ denoting the multiresolution structure after fusion, the rule is

    Eq. (13)

    D_F^\theta(x,y) = \begin{cases} I_1^\theta(x,y), & \text{when } \Delta V(x,y) > 0 \text{ and } |\Delta V(x,y)| \geq \xi, \\ I_2^\theta(x,y), & \text{when } \Delta V(x,y) < 0 \text{ and } |\Delta V(x,y)| \geq \xi, \\ V_1^\theta(x,y)\,I_1^\theta(x,y) + V_2^\theta(x,y)\,I_2^\theta(x,y), & \text{when } |\Delta V(x,y)| < \xi. \end{cases}

    In this study, the value of ξ is set to 0.8. This means that when the normalized variance of pixel (x,y) in one image is much greater than in the other, the source image of greater variance is selected. Otherwise, the coefficient is obtained by the weighted sum in Eq. (13). This fusion rule for high-frequency subimages not only results in the retention of details, but it also prevents the loss of image information caused by redundant data and ensures the consistency of the fused image. (A small code sketch of this rule appears after this list.)
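The following is a minimal sketch of the high-frequency rule of Eqs. (11)-(13), assuming the variance maps have already been decomposed with the same LWT so that each (h, v, d) band pair has matching shapes (step 2 above). The local-variance helper uses SciPy's uniform filter; function names and the normalization detail are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, size=3):
    """Normalized sample variance over a size-by-size window centered on each pixel."""
    img = img.astype(float)
    m = uniform_filter(img, size)
    v = uniform_filter(img ** 2, size) - m ** 2
    return v / v.max() if v.max() > 0 else v            # scale into [0, 1]

def fuse_highfreq(bands1, bands2, vars1, vars2, xi=0.8):
    """Apply Eqs. (12)-(13) to the (h, v, d) detail bands of one decomposition level.
    bands*/vars* are (h, v, d) tuples of detail coefficients and the corresponding
    LWT-decomposed variance bands; xi is the threshold (0.8 in this paper)."""
    dV = sum(v1 - v2 for v1, v2 in zip(vars1, vars2))    # Eq. (12): sum over directions
    fused = []
    for b1, b2, v1, v2 in zip(bands1, bands2, vars1, vars2):
        out = v1 * b1 + v2 * b2                          # |dV| < xi: weighted sum
        out = np.where((dV > 0) & (np.abs(dV) >= xi), b1, out)   # pick image 1
        out = np.where((dV < 0) & (np.abs(dV) >= xi), b2, out)   # pick image 2
        fused.append(out)
    return tuple(fused)
```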

In summary, IALM is used to determine the low-frequency component to be fused, and self-adapting regional variance is employed to estimate the high-frequency contribution. The fused wavelet coefficients are combined by ILWT to create the final result.

4. Experimental Results and Analysis

4.1. Comparison of Robust Principal Component Analysis Algorithms

To validate the new procedure, four groups of experiments are reported. The objective of the first is to compare the performance of other RPCA algorithms with that of IALM. The results are shown in Table 2. Two mainstream algorithms, singular value thresholding (SVT) and the accelerated proximal gradient (APG) method, are compared with IALM.

Table 2

Comparison of RPCA algorithms.

N | Algorithm | r | NMSE | #SVD | Time (s)
500 | SVT | 25 | 1.35×10⁻⁴ | 78 | 13.72
500 | APG | 25 | 2.33×10⁻⁵ | 56 | 10.34
500 | IALM | 25 | 4.73×10⁻⁷ | 34 | 3.32
600 | SVT | 30 | 1.27×10⁻⁴ | 77 | 19.02
600 | APG | 30 | 2.11×10⁻⁵ | 58 | 16.92
600 | IALM | 30 | 4.61×10⁻⁷ | 34 | 5.64
700 | SVT | 35 | 1.36×10⁻⁴ | 74 | 24.77
700 | APG | 35 | 2.25×10⁻⁵ | 58 | 26.25
700 | IALM | 35 | 4.62×10⁻⁷ | 34 | 8.41
800 | SVT | 40 | 1.26×10⁻⁴ | 75 | 33.95
800 | APG | 40 | 2.14×10⁻⁵ | 59 | 42.14
800 | IALM | 40 | 4.30×10⁻⁷ | 34 | 12.09
900 | SVT | 45 | 1.27×10⁻⁴ | 75 | 42.52
900 | APG | 45 | 2.03×10⁻⁵ | 60 | 59.24
900 | IALM | 45 | 4.45×10⁻⁷ | 34 | 16.78
1000 | SVT | 50 | 1.25×10⁻⁴ | 73 | 52.65
1000 | APG | 50 | 2.16×10⁻⁵ | 60 | 72.26
1000 | IALM | 50 | 4.45×10⁻⁷ | 34 | 22.54
2000 | SVT | 100 | 1.30×10⁻⁴ | 71 | 257.17
2000 | APG | 100 | 2.05×10⁻⁴ | 64 | 387.42
2000 | IALM | 100 | 4.39×10⁻⁷ | 34 | 154.43

In this table, the input dataset, the observation matrix D of Eq. (6), is of dimension N×N and has randomly missing or corrupted pixels. For fair comparison, we set r, the rank of A, to 0.05N, and define the normalized mean squared error (NMSE) as

Eq. (14)

\mathrm{NMSE} = \frac{\|D - A - E\|_F}{\|D\|_F}.

In Table 2, the column labeled #SVD indicates the number of iterations, and the "Time (s)" column displays the run time in seconds. The oversampling rate $(p/d_r)$ is six, implying that roughly 60% of the entries of the observation matrix are sampled, where $d_r = r(2N - r)$ indicates the number of degrees of freedom in a rank-$r$ matrix. The $p$ elements of $A$ are sampled uniformly to form the known samples in $D$.16
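A small sketch of how such a test problem can be constructed and scored under the assumptions stated above (Gaussian low-rank factors are assumed; the exact data generation for Table 2 is not specified further), with illustrative function names:

```python
import numpy as np

def make_test_problem(N, seed=0):
    """Synthetic setup described above: an N-by-N matrix of rank r = 0.05N,
    with p = 6*dr entries observed, where dr = r(2N - r)."""
    rng = np.random.default_rng(seed)
    r = int(0.05 * N)
    A = rng.standard_normal((N, r)) @ rng.standard_normal((r, N))   # rank-r ground truth
    dr = r * (2 * N - r)                      # degrees of freedom of a rank-r matrix
    p = 6 * dr                                # oversampling rate p/dr = 6
    mask = np.zeros(N * N, dtype=bool)
    mask[rng.choice(N * N, size=p, replace=False)] = True            # observed entries
    return A, mask.reshape(N, N)

def nmse(D, A, E):
    """Normalized mean squared error of Eq. (14)."""
    return np.linalg.norm(D - A - E) / np.linalg.norm(D)
```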

Among the three algorithms, IALM exhibits superior performance in all three measures. The results indicate that the run time grows approximately in proportion to N². Note, however, that #SVD does not depend on N.

4.2. Fusion of Clean Images

For convenience, we will refer to the new algorithm as IALM. The next two groups of experiments involve processing of left-focus–right-focus images and visible-light–infrared-light images, comparing different image-fusion algorithms with IALM. The source images are not corrupted by noise or errors. The spline 5/3 wavelet basis23 was selected for the LWT process. Through factorization, the equivalent lifting wavelet was obtained. The experimental results are shown in Figs. 4 and 5.

Fig. 4

Multifocus image-fusion experiment: (a) left-focus image, (b) right-focus image, (c) WA_LM, (d) PCNN, (e) PCA, and (f) IALM.


Fig. 5

Visible light and infrared image-fusion experiment: (a) visible-light image, (b) infrared image, (c) WA_LM, (d) PCNN, (e) PCA, and (f) IALM.


The first group of source images is multifocus (eccentric focus); the second contains visible-light and infrared images. Figure 4(a) shows a left-focused source image, whereas Fig. 4(b) is right-focused; Fig. 5(a) is a visible-light source image, while Fig. 5(b) uses an infrared source. Figures 4(c)–4(f) and 5(c)–5(f) show, respectively, the fusion results of the weighted average over low frequencies with the local-area maximum method over high frequencies (WA_LM), the improved pulse-coupled neural network (PCNN) method,24,25 PCA weighting over low frequencies with the self-adaptive regional variance estimation method over high frequencies (PCA), and the algorithm developed in this paper (IALM).

The processed images empirically suggest that a clearer fused image is obtained through IALM. More detailed information is evident, e.g., in Figs. 4(e) and 4(f), in which the image information on the left edge of the large alarm clock is apparently richer than the same feature in the other fused images. This suggests that IALM is at least as effective as PCA, while retaining more detailed information (see Table 3). Furthermore, the new algorithm achieves a fusion result with finer detail. For example, the barbed wire in Fig. 5(d) is more clearly visible than the same feature in 5(c), and the person in 5(c) is better defined than in 5(d), while in 5(e) and 5(f) both the barbed wire and the person, and even the smoke in the upper-right corner of the image, are easier to identify than in the others. This enhanced clarity admits more effective subsequent processing.

The following objective criteria were evaluated:

  • 1. The “mutual information” (MI) is a measure of statistical dependence that can be interpreted as the amount of information transmitted from the source images to the fused image.26 To assess the MI between source image I1 and the fused image, say IF, we use the estimator

    Eq. (15)

    M_{1,F} = \sum_{l_1,l_F} h_{1,F}(l_1,l_F)\,\log\frac{h_{1,F}(l_1,l_F)}{h_1(l_1)\,h_F(l_F)},
    where $h_1(l_1)$ and $h_F(l_F)$ represent the normalized histograms of source image $I_1$ and fused image $I_F$, respectively. $l_1$ and $l_F$ each take integer values indicating one of the $2^8$ gray levels $\{0,1,\ldots,255\}$. $h_{1,F}(l_1,l_F)$ denotes the jointly normalized histogram of $I_1$ and $I_F$. Similarly, $M_{2,F}$ denotes the mutual information between image $I_2$ and the fused image $I_F$. The MI between the source images $I_1$ and $I_2$ and the fused image $I_F$ is

    Eq. (16)

    M_{1,2,F} = M_{1,F} + M_{2,F}.
    A larger MI value indicates that the fused image includes more information from the original images.

  • 2. The “average gradient” (AG), or “clarity,” reflects the preservation of gray level changes in the image. With dimensions N1=N2=N, larger values of AG imply greater clarity and edge preservation. Gray-level differentials are important, e.g., in texture rendering. The AG is defined as

    Eq. (17)

    \bar{g} = \frac{1}{N^2}\sum_{x=1}^{N}\sum_{y=1}^{N}\sqrt{\left[\Delta I_x^2(x,y) + \Delta I_y^2(x,y)\right]/2},
    where $\Delta I_x(x,y) = I(x+1,y) - I(x,y)$ and $\Delta I_y(x,y) = I(x,y+1) - I(x,y)$ are the gray value differentials in the coordinate $x$ and $y$ directions, respectively.

  • 3. The “correlation coefficient” (CC) is used to compare two images of the same object (or scene). CC, which measures the correlation (degree of linear coherence) between the original and the fused images, is defined as

    Eq. (18)

    C_{F,1} = \frac{\sum_{x,y}\left[I_F(x,y)-\bar{I}_F\right]\left[I_1(x,y)-\bar{I}_1\right]}{\sqrt{\sum_{x,y}\left[I_F(x,y)-\bar{I}_F\right]^2\,\sum_{x,y}\left[I_1(x,y)-\bar{I}_1\right]^2}},
    where IF(x,y) and I1(x,y) are the gray levels at pixel (x,y) in the fused and original images, and I¯F and I¯1 denote the average gray levels in the two images.

  • 4. The “degree of distortion” (DD), a direct indicator of image fidelity, is defined as

    Eq. (19)

    D_{F,1} = \frac{1}{N_1 \times N_2}\sum_{x=1}^{N_1}\sum_{y=1}^{N_2}\left|I_F(x,y)-I_1(x,y)\right|,
    in which IF(x,y) and I1(x,y) are as defined above.

  • 5. The QAB/F metric quantifies the amount of edge information transferred from two source images IA and IB to a fused image IF.26 It is calculated as

    Eq. (20)

    Q^{AB/F} = \frac{\sum_{x=1}^{N_1}\sum_{y=1}^{N_2}\left[Q^{AF}(x,y)\,w^{A}(x,y) + Q^{BF}(x,y)\,w^{B}(x,y)\right]}{\sum_{x=1}^{N_1}\sum_{y=1}^{N_2}\left[w^{A}(x,y) + w^{B}(x,y)\right]},
    where each image is of size $N_1 \times N_2$, and $\alpha$ and $\beta$ denote edge strength and orientation, respectively. $Q^{AF}(x,y)$ is the product of $Q_\alpha^{AF}(x,y)$ and $Q_\beta^{AF}(x,y)$, which represent, respectively, how well the edge strength and orientation values of a pixel in $I_A$ are represented in the fused image $I_F$. Similarly, $Q^{BF}(x,y)$ is the product of $Q_\alpha^{BF}(x,y)$ and $Q_\beta^{BF}(x,y)$, which represent how well the edge strength and orientation values of pixel $(x,y)$ in $I_B$ are represented in $I_F$. $w^{A}(x,y)$ and $w^{B}(x,y)$ are weights reflecting the relative importance of $Q^{AF}(x,y)$ and $Q^{BF}(x,y)$. The dynamic range of $Q^{AB/F}$ is $[0,1]$, and it should be as close to 1 as possible.

  • 6. The “peak signal-to-noise ratio” (PSNR) is an expression for the ratio between the maximum possible power of a signal and the power of distorting noise that affects the quality of its representation. This objective metric is used to compare the effectiveness of algorithms by measuring the proximity of the fused image and the original image. The PSNR is computed as

    Eq. (21)

    \mathrm{PSNR} = 10\,\log_{10}\frac{(L-1)^2}{\mathrm{RMSE}^2},
    where RMSE denotes the root mean square error between the reference and fused images, and L=256 is the number of gray levels used in representing an image. A larger PSNR value indicates a better fusion result. (Minimal sketches of several of these metrics appear after this list.)
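For reference, minimal NumPy sketches of several of the metrics above (MI, AG, DD, and PSNR), assuming 8-bit gray-scale inputs, base-2 logarithms for MI, and simplified boundary handling for AG; the QAB/F metric requires edge-strength and orientation maps and is omitted. Function names are illustrative.

```python
import numpy as np

def mutual_information(img1, img2, bins=256):
    """MI estimator of Eq. (15) from joint and marginal normalized histograms (base 2)."""
    h, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins,
                             range=[[0, 255], [0, 255]])
    p = h / h.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

def average_gradient(img):
    """AG of Eq. (17), averaged over the valid interior differences."""
    img = img.astype(float)
    dx = np.diff(img, axis=0)[:, :-1]            # I(x+1,y) - I(x,y)
    dy = np.diff(img, axis=1)[:-1, :]            # I(x,y+1) - I(x,y)
    return np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0))

def degree_of_distortion(fused, ref):
    """DD of Eq. (19): mean absolute deviation from the reference image."""
    return np.mean(np.abs(fused.astype(float) - ref.astype(float)))

def psnr(fused, ref, levels=256):
    """PSNR of Eq. (21) with L = 256 gray levels."""
    rmse = np.sqrt(np.mean((fused.astype(float) - ref.astype(float)) ** 2))
    return 10.0 * np.log10((levels - 1) ** 2 / rmse ** 2)
```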

Tables 3 and 4 report the objective performance evaluation measures for the four fusion algorithms.

Table 3

Experimental objective evaluation measures of Fig. 4.

Evaluation indicator | WA_LM | PCNN | PCA | IALM
MI | 6.1604 | 7.0814 | 7.2788 | 7.5191
AG | 4.2067 | 6.8089 | 6.8096 | 6.8089
CC | 0.9768 | 0.9749 | 0.9836 | 0.9927
DD | 3.9762 | 3.9089 | 3.6406 | 3.5743
QAB/F | 0.6133 | 0.6897 | 0.6987 | 0.6929
PSNR | 22.6195 | 28.1095 | 28.1270 | 31.3846

Table 4

Evaluation comparison of Fig. 5.

Evaluation indicator | WA_LM | PCNN | PCA | IALM
MI | 2.7595 | 3.7565 | 3.8953 | 3.8938
AG | 6.8556 | 7.8666 | 7.9206 | 8.1982
CC | 0.7873 | 0.8729 | 0.8808 | 0.8976
DD | 17.1008 | 11.0016 | 10.9259 | 10.2100
QAB/F | 0.5988 | 0.6978 | 0.6798 | 0.7548
PSNR | 20.4271 | 24.3396 | 25.1234 | 25.3540

Relative to the other algorithms, IALM obtains the largest MI and AG for the fused images, suggesting that this algorithm can provide fused images with higher information content and better clarity. The objective indicators of fidelity to the source image also favor the IALM and self-adaptive regional variance estimation algorithm performance.

4.3. Fusion of Corrupted Images

To assess whether IALM is robust to missing data and image corruption, we start from the clean multifocus clock images and corrupt them. At a 0.15 error rate, 15% of the pixels of each original image are corrupted, and an additional 15% are missing (gray-level values set to zero). This implies an effective data corruption rate of 30%. The results of the test of the four algorithms are shown in Fig. 6. Figures 6(a) and 6(b) show, respectively, Fig. 4(a) with errors and Fig. 4(b) with errors. Figure 6(c) shows the result of using PCA without a denoising filter, while Fig. 6(d), labeled PCA,F, shows the result of using PCA with an adaptive median filter. The result of using PCNN with an adaptive median filter is labeled PCNNF and appears in Fig. 6(e). To achieve this outcome, we use the adaptive median filtering strategy proposed by Chen and Wu27 to identify pixels corrupted by impulsive noise and replace each damaged pixel by the median of its neighborhood. The adaptive median filter can employ varying window sizes to accommodate different noise conditions and to reduce distortions like excessive thinning or thickening of object boundaries. Figure 6(f) shows results using IALM without denoising. The clarity of result 6(f) relative to those in 6(c), 6(d), and 6(e) is quite apparent. The empirical image quality tracks the improvement in PSNR reported in the captions. Figures 6(g) and 6(h) show 400% enlargements of portions of 6(e) and 6(f).

Fig. 6

Multifocus corrupted image-fusion experiment: (a) Fig. 4(a) with errors; (b) Fig. 4(b) with errors; (c) PCA (PSNR=17.82); (d) PCA,F (PSNR=19.37); (e) PCNNF (PSNR=20.76); (f) IALM (PSNR=30.33); (g) 400% enlargement of a portion of (e); and (h) 400% enlargement of a portion of (f).


These results demonstrate the ability of IALM to recover the missing or erroneous data, while preserving image detail in both corrupted and clean images.

5. Conclusions

Traditional convolution-based wavelet transform processing for image fusion has shortcomings, including large memory requirements and high computational complexity. The approach to fusion taken in this research uses different fusion rules for the low-frequency and high-frequency decomposition components represented on a lifting wavelet basis set. Low-frequency components are fused using matrix completion and the RPCA method (IALM), whereas the high-frequency components critical for image detail are fused by taking into account the variance differences among proximal neighborhoods. Furthermore, the strong correlation between pixels in a local area is captured by a self-adaptive regional variance assessment.

Experimental results show that the new algorithm not only improves the amount of information and the correlation between the fused and source images, but also reduces the level of distortion. Significant clarity improvement relative to state-of-the-art methods is also demonstrated for corrupted images.

Acknowledgments

This research was supported in part by the National Natural Science Foundation of China (Grant No. 30970780) and by the General Program of Science and Technology Development Project of Beijing Municipal Education Commission of China (Grant No. KM201110005033). The efforts of J.D. and B.F. were supported in part by the U.S. National Science Foundation under Cooperative Agreement DBI-0939454. Any opinions, conclusions, or recommendations expressed are those of the authors and do not necessarily reflect the views of the NSF. This work was undertaken in part while Z.W. was a visiting research scholar at Michigan State University. The authors thank the Beijing University of Technology's Multimedia Information Processing Lab for assistance.

References

1. B. Khaleghi et al., "Multisensor data fusion: a review of the state-of-the-art," Inf. Fusion, 14, 28–44 (2013). http://dx.doi.org/10.1016/j.inffus.2011.08.001

2. G. Piella, "A general framework for multiresolution image fusion: from pixels to regions," Inf. Fusion, 4(4), 259–280 (2003). http://dx.doi.org/10.1016/S1566-2535(03)00046-0

3. Y. Chai, H. Li and Z. Li, "Multifocus image fusion scheme using focused region detection and multiresolution," Opt. Commun., 284(19), 4376–4389 (2011). http://dx.doi.org/10.1016/j.optcom.2011.05.046

4. R. K. Sharma and M. Pavel, Probabilistic Model-Based Multisensor Image Fusion, 1–35, Oregon Graduate Institute of Science and Technology (1999).

5. Y. Zheng, "An orientation-based fusion algorithm for multisensor image fusion," Proc. SPIE, 7710, 77100K (2010). http://dx.doi.org/10.1117/12.849656

6. R. Nava, B. Escalante-Ramírez and G. Cristóbal, "A novel multi-focus image fusion algorithm based on feature extraction and wavelets," Proc. SPIE, 7000, 700028 (2008). http://dx.doi.org/10.1117/12.781403

7. C. Ramesh and T. Ranjith, "Fusion performance measures and a lifting wavelet transform based algorithm for image fusion," Inf. Fusion, 1, 317–320 (2002). http://dx.doi.org/10.1109/ICIF.2002.1021168

8. G. Liu and C. Liu, "A novel algorithm for image fusion based on wavelet multi-resolution decomposition," J. Optoelectron., 15, 334–347 (2004).

9. Z. Qiang and J. Peng, "Remote sensing image fusion based on small wavelet transform's local variance," J. Huazhong Univ. Sci. Technol., 6, 89–91 (2003).

10. W. Sweldens, "The lifting scheme: a construction of second generation wavelets," SIAM J. Math. Anal., 29(2), 511–546 (1998). http://dx.doi.org/10.1137/S0036141095289051

11. M. Chen and H. Di, "Study on optimal wavelet decomposition level for multi-focus image fusion," Opto-Electron. Eng., 31, 64–67 (2004).

12. Q. Lin and F. Gui, "A novel image fusion algorithm based on wavelet transforms," Proc. SPIE, 7001, 70010M (2008). http://dx.doi.org/10.1117/12.780162

13. L. S. Arivazhagan, L. Ganesan and T. Kumar, "A modified statistical approach for image fusion using wavelet transform," Signal Image Video Process., 3(2), 137–144 (2009). http://dx.doi.org/10.1007/s11760-008-0065-4

14. S. El-Khamy et al., "Regularized super-resolution reconstruction of images using wavelet fusion," Opt. Eng., 44(9), 097001 (2005). http://dx.doi.org/10.1117/1.2042947

15. Z. Lin and Y. Ma, "The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices," (2011).

16. E. J. Candès and B. Recht, "Exact matrix completion via convex optimization," Found. Comput. Math., 9, 717–772 (2009). http://dx.doi.org/10.1007/s10208-009-9045-5

17. E. J. Candès, J. K. Romberg and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Commun. Pure Appl. Math., 59, 1207–1223 (2006). http://dx.doi.org/10.1002/cpa.20124

18. E. J. Candès and Y. Plan, "Matrix completion with noise," Proc. IEEE, 98, 925–936 (2010). http://dx.doi.org/10.1109/JPROC.2009.2035722

19. J. Wright et al., "Robust principal component analysis: exact recovery of corrupted low-rank matrices via convex optimization," Proc. Neural Inf. Process. Syst., 3, 1–9 (2009).

20. E. J. Candès et al., "Robust principal component analysis?," J. ACM, 58, 11 (2011).

21. W. Tan, G. Cheung and Y. Ma, "Face recovery in conference video streaming using robust principal component analysis," in Proc. IEEE Int. Conf. on Image Processing, 3225–3228 (2011).

22. H. Ji et al., "Robust video denoising using low rank matrix completion," in Proc. IEEE Int. Conf. on Computer Vision and Pattern Recognition, 1791–1798 (2010).

23. A. Z. Averbuch and V. A. Zheludev, "Image compression using spline based wavelet transforms," Wavelets Signal Image Anal., 19, 341–376 (2001).

24. X. Qu et al., "Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain," Acta Autom. Sin., 34, 1508–1514 (2008). http://dx.doi.org/10.1016/S1874-1029(08)60174-3

25. Y. Chai, H. F. Li and M. Y. Guo, "Multifocus image fusion scheme based on features of multiscale products and PCNN in lifting stationary wavelet domain," Opt. Commun., 284, 1146–1158 (2011). http://dx.doi.org/10.1016/j.optcom.2010.10.056

26. C. S. Xydeas and V. Petrovic, "Objective image fusion performance measure," Electron. Lett., 36, 308–309 (2000). http://dx.doi.org/10.1049/el:20000267

27. T. Chen and H. Wu, "Adaptive impulse detection using center-weighted median filters," IEEE Signal Process. Lett., 8, 1–3 (2001). http://dx.doi.org/10.1109/97.889633

Biography

Zhuozheng Wang is an associate professor at Beijing University of Technology and a visiting scholar at Michigan State University sponsored by the China Scholarship Council. He received his MS and PhD degrees in electronic engineering from Beijing University of Technology in 2005 and 2013. He is the first author of more than 10 academic papers and has written one book chapter. His current research interests include image processing, electroencephalography, and virtual reality technology. He has been a reviewer and is a member of SPIE.

J. R. Deller Jr. is an IEEE fellow and professor of electrical and computer engineering at Michigan State University, where he received the distinguished faculty award in 2004. He received a PhD in biomedical engineering in 1979, an MS degree in electrical and computer engineering in 1976, and an MS degree in biomedical engineering in 1975 from the University of Michigan, and his BS degree in electrical engineering (summa cum laude) in 1974 from Ohio State University. His research interests include statistical signal processing with applications to speech and hearing, genomics, and other aspects of biomedicine.

Blair D. Fleet received her BS degree (summa cum laude) from Morgan State University, Baltimore, MD, in 2010, and her MS degree from Michigan State University in 2012, both in electrical engineering. She is a National Science Foundation graduate research fellowship award recipient, as well as a GEM (the National Consortium for graduate degrees for Minorities in Engineering and Science, Inc.) fellow. She is currently pursuing her PhD in electrical engineering at Michigan State University. Her research interests include merging signal/image processing with the evolutionary computation fields to solve challenging engineering processing problems, especially in the biomedical domain.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Zhuozheng Wang, J. R. Deller Jr., and Blair D. Fleet "Pixel-level multisensor image fusion based on matrix completion and robust principal component analysis," Journal of Electronic Imaging 25(1), 013007 (14 January 2016). https://doi.org/10.1117/1.JEI.25.1.013007
Published: 14 January 2016
KEYWORDS
Image fusion

Principal component analysis

Image processing

Wavelets

Wavelet transforms

Image transmission

Infrared imaging
