By Amirreza Shaban, Shray Bansal, Zhen Liu, Irfan Essa and Byron Boots
You can find our paper at https://arxiv.org/abs/1709.03410
If you find OSLSM useful in your research, please consider citing:
@inproceedings{shaban2017one,
title={One-Shot Learning for Semantic Segmentation},
author={Shaban, Amirreza and Bansal, Shray and Liu, Zhen and Essa, Irfan and Boots, Byron},
booktitle={British Machine Vision Conference ({BMVC})},
year={2017}
}
We assume you have downloaded the repository to the ${OSLSM_HOME} path.
- Install the Caffe prerequisites and build Caffe (with PyCaffe). See http://caffe.berkeleyvision.org/installation.html for details.
cd ${OSLSM_HOME}
mkdir build
cd build
cmake ..
make all -j8
If you prefer Make, set BLAS to your preferred implementation in Makefile.config, then run:
cd ${OSLSM_HOME}
make all -j8
make pycaffe
- Update the $PYTHONPATH:
export PYTHONPATH=${OSLSM_HOME}/OSLSM/code:${OSLSM_HOME}/python:$PYTHONPATH
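If you work from a notebook or a script where the shell environment was not set, the same path setup can be done from inside Python. A minimal sketch, assuming a hypothetical checkout location (substitute your own ${OSLSM_HOME}):

```python
import os
import sys

# Hypothetical checkout location -- substitute your own ${OSLSM_HOME}.
OSLSM_HOME = os.path.expanduser("~/OSLSM")

# Same effect as the export above, but from within a Python session.
for sub in ("OSLSM/code", "python"):
    path = os.path.join(OSLSM_HOME, sub)
    if path not in sys.path:
        sys.path.insert(0, path)
```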
- Download the PASCAL VOC 2012 dataset: http://host.robots.ox.ac.uk/pascal/VOC/voc2012/
- Download the trained models from: https://gtvault-my.sharepoint.com/:u:/g/personal/ashaban6_gatech_edu/EXS5Cj8nrL9CnIJjv5YkhEgBQt9WAcIabDQv22AERZEeUQ
- Set CAFFE_PATH=${OSLSM_HOME} and PASCAL_PATH in the ${OSLSM_HOME}/OSLSM/code/db_path.py file.
- Run the following to test the models in the one-shot setting:
cd ${OSLSM_HOME}/OSLSM/os_semantic_segmentation
python test.py deploy_1shot.prototxt ${TRAINED_MODEL} ${RESULTS_PATH} 1000 fold${FOLD_ID}_1shot_test
Here ${FOLD_ID} can be 0, 1, 2, or 3, and ${TRAINED_MODEL} is the path to the trained Caffe model. Please note that we have included a different Caffe model for each ${FOLD_ID}.
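The four folds follow the PASCAL-5^i split from the paper: the 20 PASCAL VOC classes are divided into four groups of five, and fold i holds out classes 5i+1 through 5i+5 for testing while the remaining classes are used for training. A quick sketch of the split (class ids are the standard 1-based PASCAL ids; the function name is ours, not from the repo):

```python
def heldout_classes(fold_id):
    """1-based PASCAL VOC class ids held out for testing in a given fold (PASCAL-5i split)."""
    assert fold_id in (0, 1, 2, 3)
    return list(range(5 * fold_id + 1, 5 * fold_id + 6))

# fold 0 holds out classes 1-5, fold 3 holds out classes 16-20
```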
Similarly, run the following to test the models in the 5-shot setting:
cd ${OSLSM_HOME}/OSLSM/os_semantic_segmentation
python test.py deploy_5shot.prototxt ${TRAINED_MODEL} ${RESULTS_PATH} 1000 fold${FOLD_ID}_5shot_test
- For training your own models, we have included all prototxt files in the ${OSLSM_HOME}/OSLSM/os_semantic_segmentation/training directory, and the VGG pre-trained model can be found in snapshots/os_pretrained.caffemodel. You will also need to:
- Download/prepare the SBD dataset (http://home.bharathh.info/pubs/codes/SBD/download.html).
- Set SBD_PATH in ${OSLSM_HOME}/OSLSM/code/db_path.py.
- Set the profile to fold${FOLD_ID}_train for our data layer to work (check the prototxt files and ${OSLSM_HOME}/OSLSM/code/ss_datalayer.py).
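Putting the configuration together, db_path.py ends up defining the three paths mentioned above. A sketch with placeholder locations (the file shipped in the repo is the source of truth; adjust these to your machine):

```python
# Sketch of ${OSLSM_HOME}/OSLSM/code/db_path.py -- all paths are placeholders.
CAFFE_PATH = "/home/user/OSLSM"                         # ${OSLSM_HOME}
PASCAL_PATH = "/home/user/data/VOCdevkit/VOC2012"       # PASCAL VOC 2012 root
SBD_PATH = "/home/user/data/benchmark_RELEASE/dataset"  # SBD (needed for training only)
```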
The code and models here are available under the same license as Caffe (BSD-2) and the Caffe-bundled models (that is, unrestricted use; see the BVLC model license).
For further questions, you can leave them as issues in the repository, or contact the authors directly:
Amirreza Shaban [email protected]
Shray Bansal [email protected]
Zhen Liu [email protected]
Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by Berkeley AI Research (BAIR)/The Berkeley Vision and Learning Center (BVLC) and community contributors.
Check out the project site for all the details, like:
- DIY Deep Learning for Vision with Caffe
- Tutorial Documentation
- BAIR reference models and the community model zoo
- Installation instructions, and step-by-step examples.
- Intel Caffe (optimized for CPU, with multi-node support), in particular for Xeon processors (HSW, BDW, SKX, Xeon Phi).
- OpenCL Caffe e.g. for AMD or Intel devices.
- Windows Caffe
Please join the caffe-users group or gitter chat to ask questions and talk about methods and models. Framework development discussions and thorough bug reports are collected on Issues.
Happy brewing!
Caffe is released under the BSD 2-Clause license. The BAIR/BVLC reference models are released for unrestricted use.
Please cite Caffe in your publications if it helps your research:
@article{jia2014caffe,
Author = {Jia, Yangqing and Shelhamer, Evan and Donahue, Jeff and Karayev, Sergey and Long, Jonathan and Girshick, Ross and Guadarrama, Sergio and Darrell, Trevor},
Journal = {arXiv preprint arXiv:1408.5093},
Title = {Caffe: Convolutional Architecture for Fast Feature Embedding},
Year = {2014}
}