```bibtex
@inproceedings{OVCOS_ECCV2024,
  title={Open-Vocabulary Camouflaged Object Segmentation},
  author={Pang, Youwei and Zhao, Xiaoqi and Zuo, Jiaming and Zhang, Lihe and Lu, Huchuan},
  booktitle={ECCV},
  year={2024},
}
```
**Note:** Details of the proposed OVCamo dataset can be found in the document of our dataset.
- Prepare the training and testing splits: see the document of our dataset for details.
- Set the training and testing splits in the yaml file `env/splitted_ovcamo.yaml` (a minimal sketch of this file follows the list):
  - `OVCamo_TR_IMAGE_DIR`: Image directory of the training set.
  - `OVCamo_TR_MASK_DIR`: Mask directory of the training set.
  - `OVCamo_TR_DEPTH_DIR`: Depth map directory of the training set. The depth maps of the training set, which are generated by us, can be downloaded from https://github.com/lartpang/OVCamo/releases/download/dataset-v1.0/depth-train-ovcoser.zip.
  - `OVCamo_TE_IMAGE_DIR`: Image directory of the testing set.
  - `OVCamo_TE_MASK_DIR`: Mask directory of the testing set.
  - `OVCamo_CLASS_JSON_PATH`: Path of the json file `class_info.json` storing class information of the proposed OVCamo.
  - `OVCamo_SAMPLE_JSON_PATH`: Path of the json file `sample_info.json` storing sample information of the proposed OVCamo.
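  A minimal sketch of `env/splitted_ovcamo.yaml`, assuming a flat key layout; the key names come from the list above, and every path is a placeholder for your local setup:

  ```yaml
  # Hypothetical layout of env/splitted_ovcamo.yaml -- replace all paths with your own.
  OVCamo_TR_IMAGE_DIR: /data/OVCamo/train/image
  OVCamo_TR_MASK_DIR: /data/OVCamo/train/mask
  OVCamo_TR_DEPTH_DIR: /data/OVCamo/train/depth  # unzipped from depth-train-ovcoser.zip
  OVCamo_TE_IMAGE_DIR: /data/OVCamo/test/image
  OVCamo_TE_MASK_DIR: /data/OVCamo/test/mask
  OVCamo_CLASS_JSON_PATH: /data/OVCamo/class_info.json
  OVCamo_SAMPLE_JSON_PATH: /data/OVCamo/sample_info.json
  ```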
- Install dependencies: `pip install -r requirements.txt`.
  - The versions of `torch` and `torchvision` are listed in the comment of `requirements.txt` (a quick version check is sketched below).
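  To confirm that your environment matches the versions noted there, a quick check like this can be run:

  ```python
  # Print installed versions to compare against the comment in requirements.txt.
  import torch
  import torchvision

  print("torch:", torch.__version__)
  print("torchvision:", torchvision.__version__)
  print("CUDA available:", torch.cuda.is_available())
  ```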
- Run the script to:
  - train the model: `python .\main.py --config .\configs\ovcoser.py --model-name OVCoser`;
  - run inference with the model: `python .\main.py --config .\configs\ovcoser.py --model-name OVCoser --evaluate --load-from <path of the local .pth file>`.
- Download the pretrained model.
- Run the script: `python .\main.py --config .\configs\ovcoser.py --model-name OVCoser --evaluate --load-from model.pth`.
- Download our results and unzip them into `<path>/ovcoser-ovcamo-te`.
- Run the script: `python .\evaluate.py --pre <path>/ovcoser-ovcamo-te` (an optional plausibility check for the downloaded predictions is sketched below).
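For a quick plausibility check of the downloaded predictions, a sketch along the following lines can compare them with the ground-truth masks. It computes only mean IoU and MAE, assumes same-sized grayscale PNGs with matching filenames, uses placeholder directory paths, and is not the metric suite implemented by `evaluate.py`:

```python
# Illustrative sanity check only; NOT the metrics computed by evaluate.py.
# Assumptions: predictions and ground-truth masks are same-sized grayscale
# PNGs with matching filenames; both directory paths are placeholders.
from pathlib import Path

import numpy as np
from PIL import Image

PRED_DIR = Path("ovcoser-ovcamo-te")     # the unzipped results
GT_DIR = Path("/data/OVCamo/test/mask")  # OVCamo_TE_MASK_DIR

ious, maes = [], []
for pred_path in sorted(PRED_DIR.glob("*.png")):
    pred = np.asarray(Image.open(pred_path).convert("L"), dtype=np.float64) / 255.0
    gt = np.asarray(Image.open(GT_DIR / pred_path.name).convert("L"), dtype=np.float64) / 255.0
    maes.append(np.abs(pred - gt).mean())
    p, g = pred >= 0.5, gt >= 0.5
    union = (p | g).sum()
    ious.append((p & g).sum() / union if union else 1.0)

print(f"{len(ious)} images | mean IoU: {np.mean(ious):.4f} | MAE: {np.mean(maes):.4f}")
```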
- Code: MIT License.
- Dataset: OVCamo by Youwei Pang, Xiaoqi Zhao, Jiaming Zuo, Lihe Zhang, and Huchuan Lu is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International.