Rethinking Sampling Strategies for Unsupervised Person Re-identification
Xumeng Han, Xuehui Yu, Guorong Li, Jian Zhao, Gang Pan, Qixiang Ye, Jianbin Jiao and Zhenjun Han
IEEE Transactions on Image Processing (TIP) 2023 (arXiv:2107.03024)
git clone https://github.com/wavinflaghxm/GroupSampling.git
cd GroupSampling
python setup.py develop
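Training assumes a CUDA-capable setup. An optional sanity check (plain PyTorch, nothing repo-specific) before launching any jobs:

```python
# Optional sanity check: confirm PyTorch can see the GPU before training.
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```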
cd examples && mkdir data
Download the person re-ID datasets Market-1501, DukeMTMC-reID, and MSMT17, then unzip them under examples/data so that the directory tree looks like:
GroupSampling/examples/data
├── market1501
│   └── Market-1501-v15.09.15
├── dukemtmc
│   └── DukeMTMC-reID
└── msmt17
    └── MSMT17_V2
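A quick, optional way to verify the layout before training (the folder names follow the tree above; adjust them if your copies differ):

```python
# Verify the expected dataset layout under examples/data (run from the repo root).
from pathlib import Path

root = Path("examples/data")
expected = {
    "market1501": "Market-1501-v15.09.15",
    "dukemtmc": "DukeMTMC-reID",
    "msmt17": "MSMT17_V2",
}
for dataset, subdir in expected.items():
    path = root / dataset / subdir
    print(f"{path}: {'found' if path.is_dir() else 'MISSING'}")
```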
We use a single GTX-2080TI GPU for training.
- Use `--group-n 256` for Market-1501, `--group-n 128` for DukeMTMC-reID, and `--group-n 1024` for MSMT17.
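For intuition only, here is a minimal sketch of a label-grouped PyTorch sampler. It is not the GroupSampling implementation shipped in this repo, and the interpretation of `group_n` below (number of label groups traversed per chunk) is an assumption; see the repository code for the actual sampling strategy.

```python
# Illustrative sketch only -- NOT this repository's GroupSampling implementation.
# It shows the general idea of drawing samples by (pseudo-)label groups instead
# of fully at random; the meaning of group_n here is an assumption.
import random
from collections import defaultdict
from torch.utils.data import Sampler

class LabelGroupedSampler(Sampler):
    def __init__(self, labels, group_n):
        self.labels = labels      # one (pseudo-)label per image
        self.group_n = group_n    # assumed: number of label groups per chunk

    def __iter__(self):
        # Bucket image indices by label.
        buckets = defaultdict(list)
        for idx, lab in enumerate(self.labels):
            buckets[lab].append(idx)
        label_order = list(buckets)
        random.shuffle(label_order)
        order = []
        # Walk labels in chunks of group_n and shuffle images within each chunk,
        # so nearby samples in the epoch come from a limited set of labels.
        for i in range(0, len(label_order), self.group_n):
            chunk = [idx for lab in label_order[i:i + self.group_n]
                     for idx in buckets[lab]]
            random.shuffle(chunk)
            order.extend(chunk)
        return iter(order)

    def __len__(self):
        return len(self.labels)
```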
Market-1501:
CUDA_VISIBLE_DEVICES=0 python examples/train.py -d market1501 --logs-dir logs/market_resnet50 --group-n 256
DukeMTMC-reID:
CUDA_VISIBLE_DEVICES=0 python examples/train.py -d dukemtmc --logs-dir logs/duke_resnet50 --group-n 128
MSMT17:
CUDA_VISIBLE_DEVICES=0 python examples/train.py -d msmt17 --logs-dir logs/msmt_resnet50 --group-n 1024 --iters 800
We recommend using 4 GPUs to train MSMT17 for better performance.
CUDA_VISIBLE_DEVICES=0,1,2,3 python examples/train.py -d msmt17 --logs-dir logs/msmt_resnet50-gpu4 --group-n 1024 -b 256 --momentum 0.1 --lr 0.00005
To evaluate the model, run:
CUDA_VISIBLE_DEVICES=0 python examples/test.py -d $DATASET --resume $PATH
Some examples:
### Market-1501 ###
CUDA_VISIBLE_DEVICES=0 python examples/test.py -d market1501 --resume logs/market_resnet50/model_best.pth.tar
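test.py reports the usual re-ID metrics (CMC and mAP) via the repository's evaluator. For reference only, the sketch below computes rank-1 accuracy and a simplified mAP from precomputed features; it omits the camera-based junk filtering of the standard protocols, so treat its numbers as illustrative.

```python
# Reference-only rank-1 / mAP from precomputed features (NumPy).
# Omits the cross-camera junk filtering used by the standard re-ID protocols.
import numpy as np

def rank1_and_map(query_feats, query_ids, gallery_feats, gallery_ids):
    query_ids, gallery_ids = np.asarray(query_ids), np.asarray(gallery_ids)
    # Pairwise Euclidean distances (fine for modest query/gallery sizes).
    dist = np.linalg.norm(query_feats[:, None] - gallery_feats[None], axis=2)
    ranks = np.argsort(dist, axis=1)                     # gallery sorted per query
    matches = gallery_ids[ranks] == query_ids[:, None]   # True where IDs agree

    rank1 = float(matches[:, 0].mean())
    aps = []
    for row in matches:
        hits = np.flatnonzero(row)
        if hits.size == 0:
            continue
        # Precision at each correct match, averaged -> average precision.
        precision = np.arange(1, hits.size + 1) / (hits + 1)
        aps.append(precision.mean())
    return rank1, float(np.mean(aps))
```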
If you find this work useful for your research, please cite:
@article{han2022rethinking,
  title={Rethinking Sampling Strategies for Unsupervised Person Re-Identification},
  author={Han, Xumeng and Yu, Xuehui and Li, Guorong and Zhao, Jian and Pan, Gang and Ye, Qixiang and Jiao, Jianbin and Han, Zhenjun},
  journal={IEEE Transactions on Image Processing},
  year={2023},
  volume={32},
  pages={29--42},
  doi={10.1109/TIP.2022.3224325}
}
The code is built upon SpCL. Thanks to Yixiao Ge for open-sourcing it.