Owing to domain bias, domain generalization person re-identification (DG re-ID) models struggle to generalize well to unseen domains, and style is a critical factor underlying this bias. To address this problem, we propose a style-unaware meta-learning method that is less affected by domain shift. Specifically, we design a style fusion plugin that alters the style of a specific source domain and simulates more diverse domain differences. We add the generated style factors to the input images to enhance the model's generalization in unknown domains, which encourages the model to focus on the content of the inputs and to ignore style changes as much as possible. To maximize the benefit of these modules, we combine them with a meta-learning algorithm. Moreover, we design a pretext task, the sifting of samples, which is common across source domains and can be transferred to other domains to learn domain-invariant features, improving the generalization ability of the model. In addition to the components above, we introduce a trimming function that builds and fine-tunes the constructed feature space.
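To make the style fusion idea concrete, here is a minimal sketch assuming the plugin perturbs per-instance feature statistics (channel-wise mean and standard deviation) by mixing them across samples, in the spirit of AdaIN/MixStyle-style augmentation. The class name StyleFusion, the Beta-distributed mixing coefficient, and the statistic-mixing strategy are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn


class StyleFusion(nn.Module):
    """Illustrative style-fusion plugin (hypothetical, not the paper's exact module).

    Mixes per-instance channel statistics across samples in a batch to
    simulate new domain styles while leaving the content of each image intact.
    """

    def __init__(self, alpha: float = 0.1, eps: float = 1e-6):
        super().__init__()
        self.beta = torch.distributions.Beta(alpha, alpha)
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Only augment during training; pass features through unchanged at test time.
        if not self.training:
            return x
        b = x.size(0)
        mu = x.mean(dim=[2, 3], keepdim=True)                       # per-instance channel mean
        sig = (x.var(dim=[2, 3], keepdim=True) + self.eps).sqrt()   # per-instance channel std
        x_norm = (x - mu) / sig                                     # strip the instance "style"
        perm = torch.randperm(b, device=x.device)                   # borrow styles from other samples
        lam = self.beta.sample((b, 1, 1, 1)).to(x.device)           # random mixing coefficient
        mu_mix = lam * mu + (1 - lam) * mu[perm]
        sig_mix = lam * sig + (1 - lam) * sig[perm]
        return x_norm * sig_mix + mu_mix                            # re-style with fused statistics
```

Under this reading, the plugin can be inserted after early convolutional stages of the backbone so that the downstream network sees a wider range of simulated domain styles and is pushed toward style-invariant, content-focused features.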
Keywords: Education and training, Performance modeling, Statistical modeling, Data modeling, Image fusion, Visual process modeling, Design