Person re-identification (re-ID) aims to match pedestrian images of the same person across different cameras. Deep-learning-based re-ID typically comprises two phases: feature extraction and metric computation. We focus on how to extract more discriminative image features for re-ID. To address this problem, we propose a multilevel deep representation fusion (MDRF) model based on a convolutional neural network. Specifically, the MDRF model extracts image features at different network levels in a single forward pass. These multilevel features are then combined by a fusion layer to produce the final image representation, which is fed into a combined softmax and triplet loss to optimize the model. The proposed method not only utilizes the abstract information of high-level features but also integrates the appearance information of low-level features. Extensive experiments on the public Market-1501, DukeMTMC-reID, and CUHK03 datasets demonstrate the effectiveness of the proposed method for person re-ID.
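The pipeline described in the abstract — pooling features from several network levels, fusing them into one embedding, and training with a combined softmax and triplet loss — can be sketched as follows. This is a minimal NumPy toy, not the authors' implementation: the channel counts, the concatenation-based fusion, and the margin value are illustrative assumptions.

```python
import numpy as np

def fuse_multilevel(features):
    """Fuse feature maps from different network levels.

    Each map has shape (C_i, H_i, W_i); global average pooling reduces
    each to a C_i-vector, and the vectors are concatenated (a simple
    stand-in for the paper's fusion layer).
    """
    pooled = [f.mean(axis=(1, 2)) for f in features]
    return np.concatenate(pooled)

def softmax_ce(logits, label):
    """Softmax cross-entropy loss for a single sample (identity classification)."""
    z = logits - logits.max()                    # numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Margin-based triplet loss on fused embeddings (margin is assumed)."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)

# Toy multilevel features: a low-level (64-channel) and a high-level
# (256-channel) map from one forward pass.
rng = np.random.default_rng(0)
low = rng.standard_normal((64, 32, 16))
high = rng.standard_normal((256, 8, 4))
emb = fuse_multilevel([low, high])
print(emb.shape)  # (320,) — low- and high-level channels concatenated

# Combined objective on the fused representation (weights assumed equal).
logits = rng.standard_normal(751)                # e.g. 751 Market-1501 identities
total = softmax_ce(logits, 0) + triplet_loss(emb, emb + 0.1, -emb)
```

In a real network the two losses would be backpropagated jointly through the fusion layer, so the low-level appearance cues and high-level semantic cues both shape the learned metric.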