Code for the CVPR 2021 paper "Uncertainty Reduction for Model Adaptation in Semantic Segmentation"
In this package, we provide the PyTorch code for our CVPR 2021 paper on model adaptation for segmentation. If you use our code, please cite us:
@inproceedings{teja2021uncertainty,
  author={S, Prabhu Teja and Fleuret, François},
  booktitle={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  title={Uncertainty Reduction for Model Adaptation in Semantic Segmentation},
  year={2021},
  pages={9608-9618},
  doi={10.1109/CVPR46437.2021.00949}
}
The PDF version of the paper is available here.
We use PyTorch for the experiments. The conda environment required to run this code can be installed with
conda create --name ucr --file spec-file.txt
While we aren't aware of any Python-version-specific idiosyncrasies, we tested this on Python 3.7 on Debian with the above spec-file.txt. If you find any missing details, or have trouble getting it to run, please create an issue.
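As an optional sanity check after creating the environment (a minimal sketch, not part of the repository), you can verify that PyTorch and a GPU are visible from inside the ucr environment:

```python
# Optional check that the environment resolves PyTorch and sees a CUDA device.
import torch

print('PyTorch', torch.__version__, '| CUDA available:', torch.cuda.is_available())
```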
We use the pretrained models provided by MaxSquareLoss, available at https://drive.google.com/file/d/1QMpj7sPqsVwYldedZf8A5S2pT-4oENEn/view; download them into a folder named pretrained.
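As a rough illustration (not the repository's actual loading code; the checkpoint filename below is hypothetical), the downloaded weights can be inspected with torch.load to confirm the download is intact:

```python
import torch

# Hypothetical filename; use whatever the downloaded checkpoint is actually named.
ckpt = torch.load('pretrained/gta5_source_pretrained.pth', map_location='cpu')
# Some checkpoints wrap the weights in a dict under a 'state_dict' key.
state_dict = ckpt['state_dict'] if 'state_dict' in ckpt else ckpt
print(len(state_dict), 'parameter tensors in the checkpoint')
```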
First, the path to the Cityscapes dataset has to be set in datasets/new_datasets.py in the dataset's constructor. The path to the NTHU cities dataset can be set in utils/argparser.py at DATA_TGT_DIRECTORY in line 15, or can be passed on the command line with --data-tgt-dir. The code trains the network, evaluates its performance, and writes the results to a log file called training_logger in the savedir.
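For reference, here is a hypothetical sketch of how a DATA_TGT_DIRECTORY default and a --data-tgt-dir flag typically fit together; the exact code in utils/argparser.py may differ, and everything besides those two names is illustrative:

```python
import argparse

# Edit this default to point at your local copy of the NTHU cross-city dataset.
DATA_TGT_DIRECTORY = '/path/to/NTHU_Datasets'

parser = argparse.ArgumentParser()
parser.add_argument('--data-tgt-dir', type=str, default=DATA_TGT_DIRECTORY,
                    help='root directory of the target-city dataset')
args, _ = parser.parse_known_args()
print('Target data directory:', args.data_tgt_dir)
```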
The code can then be run with

python do_segm.py --city {city} --no-src-data --freeze-classifier --unc-noise --lambda-ce 1 --lambda-ent 1 --save {savedir} --lambda-ssl 0.1

where city is one of Rome, Rio, Tokyo, or Taipei, and savedir is the path to save the logs and models.
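To run all four target cities in one go, a small wrapper along these lines can repeat the command above (a sketch; the savedir layout is illustrative):

```python
import subprocess

# Run the adaptation command once per NTHU cross-city target.
for city in ['Rome', 'Rio', 'Tokyo', 'Taipei']:
    subprocess.run([
        'python', 'do_segm.py', '--city', city,
        '--no-src-data', '--freeze-classifier', '--unc-noise',
        '--lambda-ce', '1', '--lambda-ent', '1',
        '--lambda-ssl', '0.1',
        '--save', f'logs/{city.lower()}',  # illustrative savedir
    ], check=True)
```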
This code borrows parts from MaxSquareLoss (the network definitions and pretrained models) and CRST (class-balanced pseudo-label generation). The author thanks Evann Courdier for parts of the clean datasets code.
This software is distributed under the MIT license, which pretty much means that you can use it however you want and for whatever reason you want. All the information regarding support, copyright and the license can be found in the LICENSE
file in the repository.