This repository has been archived by the owner on Oct 31, 2023. It is now read-only.

chore: add Dockerfile for inference set-up #109

Open · wants to merge 1 commit into base: main
92 changes: 92 additions & 0 deletions docker/Dockerfile
@@ -0,0 +1,92 @@
FROM nvidia/cuda:10.0-cudnn7-devel-ubuntu16.04

RUN apt-get -y update
RUN apt-get install -y --no-install-recommends \
build-essential \
git \
libgoogle-glog-dev \
libgtest-dev \
libiomp-dev \
libleveldb-dev \
liblmdb-dev \
libopencv-dev \
libopenmpi-dev \
libsnappy-dev \
libprotobuf-dev \
openmpi-bin \
openmpi-doc \
protobuf-compiler \
python-dev \
python-pip
RUN pip install --upgrade pip
RUN pip install setuptools
RUN pip install --user \
future \
numpy \
protobuf \
typing \
hypothesis
RUN apt-get install -y --no-install-recommends \
libgflags-dev \
cmake

RUN git clone --branch master --recursive https://github.com/pytorch/pytorch.git
RUN pip install typing pyyaml
WORKDIR /pytorch
RUN git submodule update --init --recursive
RUN python setup.py install

RUN git clone https://github.com/facebookresearch/detectron /detectron

# Install Python dependencies
RUN pip install -U pip
RUN pip install -r /detectron/requirements.txt

# Install the COCO API
RUN git clone https://github.com/cocodataset/cocoapi.git /cocoapi
WORKDIR /cocoapi/PythonAPI

ENV PYTHONPATH=/usr/local
ENV Caffe2_DIR=/usr/local/lib/python2.7/dist-packages/torch/share/cmake/Caffe2/
ENV PYTHONPATH=${PYTHONPATH}:/pytorch/build
ENV LD_LIBRARY_PATH=/usr/local/lib:${LD_LIBRARY_PATH}

ENV LD_LIBRARY_PATH=/usr/local/lib/python2.7/dist-packages/torch/lib/:${LD_LIBRARY_PATH}
ENV LIBRARY_PATH=/usr/local/lib/python2.7/dist-packages/torch/lib/:${LIBRARY_PATH}

ENV C_INCLUDE_PATH=/usr/local/lib/python2.7/dist-packages/torch/lib/include/:${C_INCLUDE_PATH}
ENV CPLUS_INCLUDE_PATH=/usr/local/lib/python2.7/dist-packages/torch/lib/include/:${CPLUS_INCLUDE_PATH}

ENV C_INCLUDE_PATH=/pytorch/:${C_INCLUDE_PATH}
ENV CPLUS_INCLUDE_PATH=/pytorch/:${CPLUS_INCLUDE_PATH}

ENV C_INCLUDE_PATH=/pytorch/build/:${C_INCLUDE_PATH}
ENV CPLUS_INCLUDE_PATH=/pytorch/build/:${CPLUS_INCLUDE_PATH}

ENV C_INCLUDE_PATH=/pytorch/torch/lib/include/:${C_INCLUDE_PATH}
ENV CPLUS_INCLUDE_PATH=/pytorch/torch/lib/include/:${CPLUS_INCLUDE_PATH}

RUN make install

WORKDIR /detectron

RUN make
#RUN make ops

RUN apt-get -y update \
&& apt-get -y install \
wget \
software-properties-common

# VideoPose3d
# get ffmpeg
RUN add-apt-repository ppa:mc3man/trusty-media
RUN apt-get update
RUN apt-get install -y ffmpeg
RUN apt-get install -y frei0r-plugins

RUN git clone https://github.com/facebookresearch/VideoPose3D.git /VideoPose3D
RUN mkdir /VideoPose3D/checkpoint
# A `cd` inside a RUN does not persist into the next layer, so download
# straight into the checkpoint directory with wget -P instead.
RUN wget -P /VideoPose3D/checkpoint https://dl.fbaipublicfiles.com/video-pose-3d/pretrained_h36m_detectron_coco.bin

RUN cp /VideoPose3D/inference/infer_video.py /detectron/tools/
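As a side note, the separate `apt-get update` / `apt-get install` layers above could each be collapsed into a single layer; a sketch of the conventional pattern (same packages as the `wget`/`software-properties-common` step, with the apt lists cleaned up afterwards to keep the layer small):

```dockerfile
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        wget \
        software-properties-common \
    && rm -rf /var/lib/apt/lists/*
```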
17 changes: 17 additions & 0 deletions docker/README.md
@@ -0,0 +1,17 @@
# VideoPose3D Dockerfile

>This Dockerfile helps you get started with the [Inference](../INFERENCE.md#Step-3:-inferring-2D-keypoints-with-Detectron) guide for testing VideoPose3D on your own videos.
>
>It carries out [Step 1: Setup](../INFERENCE.md#Step-1:-setup)

It is intended for systems with an RTX graphics card *(others may work as well)* and CUDA 10 installed on the host.

## How to work with Docker
- `docker build -t detectron_cu10:latest .` *(this takes quite some time; don't worry if some red warnings pop up)*
- `nvidia-docker run --rm -it detectron_cu10:latest python detectron/tests/test_batch_permutation_op.py` *(should report: 2 tests OK)*
- `docker run -itd --name detcu10 --runtime=nvidia detectron_cu10` *(run the container in detached mode)*
- `docker exec -it detcu10 /bin/bash` *(log into the container)*

→ Now you can continue with [Step 2](../INFERENCE.md#Step-2-(optional):-video-preprocessing)
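The commands above can also be kept in a small helper so the image tag and container name live in one place (a sketch: `detectron_cu10:latest` and `detcu10` are the names used in this README; the helper only composes the command strings and does not run Docker itself):

```shell
#!/bin/sh
# Sketch: build the docker command strings used above from a single
# image tag and container name, so renaming touches only one place.
IMAGE="detectron_cu10:latest"
NAME="detcu10"

build_cmd() {
  # Build the image from the current directory.
  printf 'docker build -t %s .' "$IMAGE"
}

run_detached_cmd() {
  # Start the container detached with the NVIDIA runtime.
  printf 'docker run -itd --name %s --runtime=nvidia %s' "$NAME" "$IMAGE"
}

exec_cmd() {
  # Open an interactive shell inside the running container.
  printf 'docker exec -it %s /bin/bash' "$NAME"
}

build_cmd; echo
run_detached_cmd; echo
exec_cmd; echo
```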

Happy Coding :computer: :tada: