This is the official inference code for Polygon-RNN++ (CVPR 2018). An official PyTorch reimplementation with training/tool code is available here. For technical details, please refer to:
Efficient Interactive Annotation of Segmentation Datasets with Polygon-RNN++
David Acuna*, Huan Ling*, Amlan Kar*, Sanja Fidler (* denotes equal contribution)
CVPR 2018
[Paper] [Video] [Project Page] [Demo] [Training/Tool Code]
- Clone the repository
git clone https://github.com/davidjesusacu/polyrnn && cd polyrnn
- Install dependencies
(Note: Using a GPU (and tensorflow-gpu) is recommended; the model will run on a CPU, albeit slowly. A quick way to check GPU visibility is sketched after this step.)
virtualenv env
source env/bin/activate
pip install -r requirements.txt
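Before running inference, it can be useful to confirm that TensorFlow actually sees a GPU. The following is a minimal, generic TensorFlow 1.x check and is not part of the repository's scripts:

```python
# Sanity check: does this TensorFlow installation see a GPU?
# Generic TensorFlow 1.x snippet; not part of the repository's scripts.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
print("GPU available:", tf.test.is_gpu_available())
```

If this prints False, inference will still run, just on the CPU and noticeably slower.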
- Download the pre-trained models and graphs (448 MB)
(These models were trained on the Cityscapes Dataset)
./models/download_and_unpack.sh
- Run demo_inference.sh
./src/demo_inference.sh
This should produce results (predicted polygons overlaid on the input image crops) in the output/ folder.
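As an illustration of what such a result looks like, a predicted polygon can be overlaid on its image crop with a few lines of matplotlib. This is a minimal sketch; the file path and vertex array below are hypothetical, and the actual output format is whatever the demo script writes to output/:

```python
# Illustrative only: overlay an (N, 2) array of polygon vertices on an image crop.
# The crop path and the vertex array are hypothetical placeholders.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon

crop = plt.imread("imgs/example_crop.png")  # hypothetical input crop
vertices = np.array([[50, 60], [170, 55], [180, 160], [60, 170]])  # hypothetical (x, y) vertices

fig, ax = plt.subplots()
ax.imshow(crop)
ax.add_patch(Polygon(vertices, closed=True, fill=False, edgecolor="lime", linewidth=2))
ax.axis("off")
plt.savefig("overlay.png", bbox_inches="tight")
```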
Check out the IPython notebook, which provides a simple walkthrough demonstrating how to run our model on sample input image crops.
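The notebook is the authoritative walkthrough; as a rough sketch of what frozen-graph inference in TensorFlow 1.x generally looks like, the snippet below loads a .pb file and runs a single crop through it. The graph path and tensor names are placeholders, not the ones shipped with the downloaded models:

```python
# Rough sketch of frozen-graph inference in TensorFlow 1.x.
# The .pb path and tensor names are placeholders; the notebook and the code in
# src/ use the actual names that come with the downloaded models.
import numpy as np
import tensorflow as tf

def load_frozen_graph(pb_path):
    """Load a frozen GraphDef (.pb) into a fresh tf.Graph."""
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    graph = tf.Graph()
    with graph.as_default():
        tf.import_graph_def(graph_def, name="")
    return graph

graph = load_frozen_graph("models/example_model.pb")  # placeholder path
with tf.Session(graph=graph) as sess:
    image_input = graph.get_tensor_by_name("image_input:0")          # placeholder name
    polygon_output = graph.get_tensor_by_name("polygon_vertices:0")  # placeholder name
    crop = np.zeros((1, 224, 224, 3), dtype=np.float32)  # dummy RGB crop
    vertices = sess.run(polygon_output, feed_dict={image_input: crop})
```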
If you use this code, please cite:
@inproceedings{AcunaCVPR18,
  title     = {Efficient Interactive Annotation of Segmentation Datasets with Polygon-RNN++},
  author    = {David Acuna and Huan Ling and Amlan Kar and Sanja Fidler},
  booktitle = {CVPR},
  year      = {2018}
}
@inproceedings{CastrejonCVPR17,
  title     = {Annotating Object Instances with a Polygon-RNN},
  author    = {Lluis Castrejon and Kaustav Kundu and Raquel Urtasun and Sanja Fidler},
  booktitle = {CVPR},
  year      = {2017}
}