A multimodal image fusion method based on the joint sparse model (JSM), multiscale dictionary learning, and the structural similarity index (SSIM) is presented. As an effective signal representation technique, JSM is derived from distributed compressed sensing and has been successfully employed in many image-processing applications, such as image classification and fusion. In traditional JSM-based image fusion, however, a single highly redundant dictionary has difficulty capturing the correlations between source images. The proposed fusion model therefore learns a more compact multiscale dictionary that effectively combines the multiscale analysis of the nonsubsampled contourlet transform with single-scale joint sparse representation in the image domain, overcoming the limitations of single-scale sparse fusion and improving fusion quality. Experimental results demonstrate that the proposed method achieves state-of-the-art performance in terms of both subjective visual quality and objective metrics, especially when fusing multimodal images.
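The abstract does not include implementation details, but the core JSM idea can be illustrated compactly. The following is a minimal sketch of JSM-1 style patch fusion, assuming a pre-learned single-scale dictionary `D`; the paper's multiscale NSCT-domain dictionary and SSIM-based refinements are not reproduced here, and the function name `fuse_patches` and the max-activity fusion rule are illustrative assumptions rather than the authors' exact method.

```python
# Minimal JSM-1 style patch fusion sketch (NumPy + scikit-learn).
# Assumes a pre-learned dictionary D; not the paper's multiscale method.
import numpy as np
from sklearn.linear_model import orthogonal_mp

def fuse_patches(p1, p2, D, n_nonzero=8):
    """Fuse two corresponding patch vectors via the joint sparse model.

    JSM-1 decomposes each source into a shared common component and a
    source-specific innovation: p_k = D(z_c + z_k). Stacking the two
    patches gives a joint system with the block dictionary
        [[D, D, 0],
         [D, 0, D]]
    whose sparse solution separates z_c, z_1, z_2.
    """
    n = D.shape[1]
    D_joint = np.block([[D, D, np.zeros_like(D)],
                        [D, np.zeros_like(D), D]])
    y = np.concatenate([p1, p2])
    z = orthogonal_mp(D_joint, y, n_nonzero_coefs=n_nonzero)
    z_c, z_1, z_2 = z[:n], z[n:2 * n], z[2 * n:]
    # Max-activity rule (an assumed choice): keep the innovation with the
    # larger l1-norm, then add it back to the common component.
    z_i = z_1 if np.abs(z_1).sum() >= np.abs(z_2).sum() else z_2
    return D @ (z_c + z_i)

# Toy usage: random unit-norm dictionary and two 8x8 patches (64-dim vectors).
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
p1, p2 = rng.standard_normal(64), rng.standard_normal(64)
fused = fuse_patches(p1, p2, D)
print(fused.shape)  # (64,)
```

In a full pipeline, this per-patch step would run over sliding patches of each NSCT subband, with overlapping reconstructions averaged to form the fused image.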
Keywords: Image fusion, Associative arrays, Medical imaging, Infrared imaging, Infrared radiation, Optical engineering, Image quality