Multi-view clustering has attracted significant attention for its ability to exploit information shared across multiple views. Its applications include image and video analysis, bioinformatics, and social network analysis, where integrating diverse data sources deepens understanding of the data. However, existing multi-view models suffer from two limitations: (1) latent representations extracted directly from raw data by encoders are susceptible to noise and other interference, and (2) complementary information among views is often overlooked, causing the loss of crucial view-specific information. We therefore propose a distinctive double-level deep multi-view collaborative learning approach. Our method further processes the latent representations learned by the encoder through multiple perceptron layers to obtain richer semantic information. In addition, we introduce dual-path guidance at both the feature and label levels to facilitate the learning of complementary information across views, and we employ pre-clustering to guide mutual learning among views via pseudo-labels. Experimental results on four image datasets (Caltech-5V, STL10, CIFAR-10, CIFAR-100) show that our method achieves state-of-the-art clustering performance under standard metrics, including accuracy, normalized mutual information, and purity. Comparisons with existing clustering algorithms validate its effectiveness.
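As a rough illustration of the double-level design described in the abstract (a per-view encoder whose latent representation is further processed by perceptron layers into feature-level and label-level outputs, with pseudo-labels derived for cross-view guidance), the following NumPy sketch shows only the forward pass. All layer sizes, function names, and the use of argmax pseudo-labels are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layers(sizes):
    # Small random weights for each consecutive pair of layer sizes.
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(x, layers):
    # Stack of linear layers with ReLU between them (none after the last).
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)
    return x

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical dimensions: 2 views, 8 samples, 32 input features,
# 16-dim latent space, 10 clusters.
n_views, n_samples, input_dim, latent_dim, n_clusters = 2, 8, 32, 16, 10

views = [rng.normal(size=(n_samples, input_dim)) for _ in range(n_views)]
encoders = [init_layers([input_dim, 64, latent_dim]) for _ in range(n_views)]
feature_heads = [init_layers([latent_dim, latent_dim]) for _ in range(n_views)]
label_heads = [init_layers([latent_dim, n_clusters]) for _ in range(n_views)]

for v in range(n_views):
    z = mlp(views[v], encoders[v])        # encoder latent representation
    f = mlp(z, feature_heads[v])          # feature-level output
    p = softmax(mlp(z, label_heads[v]))   # label-level soft cluster assignment
    pseudo = p.argmax(axis=1)             # pseudo-labels for cross-view guidance
```

In a full implementation the feature-level outputs of different views would be aligned (e.g. via a contrastive or consistency loss) and the pseudo-labels from pre-clustering would supervise the label-level outputs of the other views; the sketch only fixes the shapes and data flow.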
Keywords: data modeling, deep learning, machine learning, image enhancement, data processing, matrices, visualization