17 October 2024
Style-unaware meta-learning for generalizable person re-identification
Jie Shao, Pengpeng Cai
Abstract

Due to domain bias, domain-generalization person re-identification models do not generalize well to unseen domains, and style is a critical factor behind this bias. To address this problem, we propose a style-unaware meta-learning method that is less affected by domain shift. Specifically, we design a style fusion plugin that alters the style of a specific source domain to simulate more diverse domain differences. Adding the generated style factors to the input images enhances the model's generalization to unknown domains, leading the model to attend to the content of its inputs and, as far as possible, ignore style changes. To maximize the benefit of the model, we combine our modules with a meta-learning algorithm. Moreover, we design a pretext task, sample sifting, that is common across domains and can therefore be applied to other domains to learn domain-invariant features, improving the model's generalization ability. Finally, we introduce a trimming function that builds and fine-tunes the constructed feature space.
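The core of the style fusion idea described above is that an image's "style" can be summarized by simple feature statistics, and a new style can be synthesized by mixing the statistics of two source-domain images. The paper's exact formulation is not given in the abstract, so the following is only a minimal illustrative sketch in the spirit of statistics-mixing augmentation; the function name `style_fuse` and the interpolation weight `alpha` are assumptions, not the authors' API.

```python
import numpy as np

def style_fuse(content_img, style_img, alpha=0.5, eps=1e-6):
    """Blend channel-wise mean/std statistics of two images.

    Each image's "style" is approximated by its per-channel mean and
    standard deviation. A new style is synthesized by interpolating
    between the statistics of two source-domain images, then applied
    to the normalized content image. This simulates an unseen domain's
    style while preserving image content.
    """
    c_mean = content_img.mean(axis=(0, 1), keepdims=True)
    c_std = content_img.std(axis=(0, 1), keepdims=True)
    s_mean = style_img.mean(axis=(0, 1), keepdims=True)
    s_std = style_img.std(axis=(0, 1), keepdims=True)

    # Interpolated statistics act as the generated style factor.
    mix_mean = alpha * c_mean + (1 - alpha) * s_mean
    mix_std = alpha * c_std + (1 - alpha) * s_std

    # Strip the content image's own style, then re-style it.
    normalized = (content_img - c_mean) / (c_std + eps)
    return normalized * mix_std + mix_mean
```

In practice such mixing is usually applied to intermediate feature maps inside the network rather than to raw pixels, but the pixel-level version above conveys the mechanism: the model sees the same content under many synthesized styles, which encourages it to ignore style changes.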

© 2024 SPIE and IS&T
Jie Shao and Pengpeng Cai "Style-unaware meta-learning for generalizable person re-identification," Journal of Electronic Imaging 33(5), 053048 (17 October 2024). https://doi.org/10.1117/1.JEI.33.5.053048
Received: 15 July 2024; Accepted: 24 September 2024; Published: 17 October 2024
KEYWORDS: Education and training, Performance modeling, Statistical modeling, Data modeling, Image fusion, Visual process modeling, Design
