This code extracts bottom-up image features (paper). It is based on the original bottom-up attention model and the PyTorch implementation of Faster R-CNN.
- Python 3.6
- PyTorch 0.4.0
- CUDA 9.0
Note: CPU version is not supported.
-
Clone the code:
git clone https://github.com/violetteshev/bottom-up-features.git
-
Install PyTorch with pip:
pip install https://download.pytorch.org/whl/cu90/torch-0.4.0-cp36-cp36m-linux_x86_64.whl
or with Anaconda:
conda install pytorch=0.4.0 cuda90 -c pytorch
-
Install dependencies:
pip install -r requirements.txt
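After installation, a quick sanity check (not part of the repository; it only uses the standard torch API) can confirm the environment matches the requirements listed above:
import sys
import torch

# Quick environment check against the requirements above (Python 3.6, PyTorch 0.4.0, CUDA 9.0).
print("Python:", sys.version.split()[0])
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("CUDA version:", torch.version.cuda)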
-
Compile the code:
cd lib
sh make.sh
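A rough way to verify the build, assuming make.sh produces compiled extensions as .so files somewhere under lib/ (this is an assumption about the build output, not documented by the repository):
from pathlib import Path

# Assumption: make.sh builds the CUDA/C extensions as .so files under lib/.
so_files = sorted(Path("lib").rglob("*.so"))
print("Compiled extensions found:", len(so_files))
for so in so_files:
    print(" ", so)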
-
Download the pretrained model from Dropbox or Google Drive and put it in the models/ folder.
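As a sanity check, the downloaded checkpoint can be opened with torch.load; the filename below is a placeholder, so substitute the actual name of the downloaded file:
import torch

# "models/pretrained_model.pth" is a placeholder; use the actual file placed in models/.
state = torch.load("models/pretrained_model.pth", map_location="cpu")
print("Checkpoint loaded; top-level keys:", list(state)[:5] if isinstance(state, dict) else type(state))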
-
To extract image features and store them in .npy format:
python extract_features.py --image_dir images --out_dir features
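A hedged example of reading a saved feature file back with NumPy; the path is illustrative and assumes one .npy file per input image under the --out_dir folder:
import numpy as np

# Illustrative path: one .npy file per input image is assumed under the --out_dir folder.
feats = np.load("features/example.npy")
print(feats.shape)  # typically (num_regions, feature_dim) for bottom-up features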
-
To also save bounding boxes, use the --boxes argument:
python extract_features.py --image_dir images --out_dir features --boxes
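The boxes can be read back the same way; the file naming and layout below are assumptions, so adjust the path to match what the script actually writes:
import numpy as np

# Assumed naming/layout; adjust the path to match the files produced with --boxes.
boxes = np.load("features/example_boxes.npy")
print(boxes.shape)  # expected (num_regions, 4): one (x1, y1, x2, y2) box per region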