Bump Dockerfile versions, add user to sudoers list. (nerfstudio-project#1448)

* Update Dockerfile, bump CUDA from 11.7.1 to 11.8.0, bump Colmap from 3.7 to 3.8, add user to sudoers list.

* Improve description.

* Fix wrong description and typos.

* Add better description.

* Fix build error of Dockerfile.

* Bump pytorch versions.

---------

Co-authored-by: Nicolas Zunhammer <[email protected]>
Zunhammer and Nicolas Zunhammer authored Feb 19, 2023
1 parent adc70c3 commit 5311b72
Showing 2 changed files with 35 additions and 19 deletions.
40 changes: 27 additions & 13 deletions Dockerfile
@@ -1,14 +1,16 @@
# Define base image.
-FROM nvidia/cuda:11.7.1-devel-ubuntu22.04
+FROM nvidia/cuda:11.8.0-devel-ubuntu22.04

+# Variables used at build time.
+## CUDA architectures, required by Colmap and tiny-cuda-nn.
+## NOTE: All commonly used GPU architectures are included and supported here. To speed up the image build process, remove all architectures but the one of your specific GPU. Find details here: https://developer.nvidia.com/cuda-gpus (8.6 translates to 86 in the line below) or in the docs.
+ARG CUDA_ARCHITECTURES=90;89;86;80;75;70;61;52;37
+
# Set environment variables.
## Set non-interactive to prevent asking for user inputs blocking image creation.
ENV DEBIAN_FRONTEND=noninteractive
## Set timezone as it is required by some packages.
ENV TZ=Europe/Berlin
-## CUDA architectures, required by tiny-cuda-nn.
-## NOTE: All commonly used GPU architectures are included and supported here. To speedup the image build process remove all architectures but the one of your explicit GPU. Find details here: https://developer.nvidia.com/cuda-gpus (8.6 translates to 86 in the line below) or in the docs.
-ENV TCNN_CUDA_ARCHITECTURES=90;89;86;80;75;70;61;52;37
## CUDA Home, required to find CUDA in some packages.
ENV CUDA_HOME="/usr/local/cuda"

@@ -17,6 +19,7 @@ RUN apt-get update && \
apt-get install -y --no-install-recommends \
build-essential \
cmake \
+    curl \
ffmpeg \
git \
libatlas-base-dev \
@@ -27,22 +30,26 @@ RUN apt-get update && \
libboost-test-dev \
libcgal-dev \
libeigen3-dev \
+    libflann-dev \
libfreeimage-dev \
libgflags-dev \
libglew-dev \
libgoogle-glog-dev \
+    libmetis-dev \
libprotobuf-dev \
libqt5opengl5-dev \
+    libsqlite3-dev \
libsuitesparse-dev \
+    nano \
protobuf-compiler \
python3.10-dev \
python3-pip \
qtbase5-dev \
+    sudo \
wget && \
rm -rf /var/lib/apt/lists/*


# Install GLOG (required by ceres).
RUN git clone --branch v0.6.0 https://github.com/google/glog.git --single-branch && \
cd glog && \
@@ -69,22 +76,28 @@ RUN git clone --branch 2.1.0 https://ceres-solver.googlesource.com/ceres-solver.git
rm -rf ceres-solver

# Install colmap.
-RUN git clone --branch 3.7 https://github.com/colmap/colmap.git --single-branch && \
+RUN git clone --branch 3.8 https://github.com/colmap/colmap.git --single-branch && \
cd colmap && \
mkdir build && \
cd build && \
cmake .. -DCUDA_ENABLED=ON \
-    -DCUDA_NVCC_FLAGS="--std c++14" && \
+    -DCUDA_NVCC_FLAGS="--std c++14" \
+    -DCMAKE_CUDA_ARCHITECTURES=${CUDA_ARCHITECTURES} && \
make -j && \
make install && \
cd ../.. && \
rm -rf colmap

# Create non-root user and setup environment.
-RUN useradd -m -d /home/user -u 1000 user
+RUN useradd -m -d /home/user -g root -G sudo -u 1000 user
+RUN usermod -aG sudo user
+# Set user password
+RUN echo "user:user" | chpasswd
+# Ensure users in the sudo group are not asked for a password when using the sudo command, by amending the sudoers file
+RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers

# Switch to new user and workdir.
-USER 1000:1000
+USER 1000
WORKDIR /home/user

# Add local user binary folder to PATH variable.
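With the sudoers amendment above, the non-root user can install additional packages at runtime without being prompted for a password. A quick way to check this from inside a running container (a sketch; `htop` is just an example package):

```bash
# Run as the container's default user (UID 1000).
sudo apt-get update              # should succeed without a password prompt
sudo apt-get install -y htop     # example: install an arbitrary extra tool
```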
@@ -93,16 +106,17 @@ SHELL ["/bin/bash", "-c"]

# Upgrade pip and install packages.
RUN python3.10 -m pip install --upgrade pip setuptools pathtools promise
-# Install pytorch and submodules.
-RUN python3.10 -m pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu116
-# Install tynyCUDNN.
+# Install pytorch and submodules (currently we still use cu116, which is the latest version for torch 1.12.1 and is compatible with CUDA 11.8).
+RUN python3.10 -m pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
+# Install tiny-cuda-nn (we need to set the target architectures as an environment variable first).
+ENV TCNN_CUDA_ARCHITECTURES=${CUDA_ARCHITECTURES}
RUN python3.10 -m pip install git+https://github.com/NVlabs/tiny-cuda-nn.git#subdirectory=bindings/torch

# Copy nerfstudio folder and give ownership to user.
ADD . /home/user/nerfstudio
USER root
-RUN chown -R user:user /home/user/nerfstudio
-USER 1000:1000
+RUN chown -R user /home/user/nerfstudio
+USER 1000

# Install nerfstudio dependencies.
RUN cd nerfstudio && \
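The cu116 wheels installed above run on the CUDA 11.8 base image thanks to the driver's backward compatibility. A minimal sanity check of the result (a sketch; it assumes the container was started with `--gpus all`):

```bash
# Confirm that the CUDA-enabled torch build sees the GPU inside the container.
python3.10 -c "import torch; print(torch.__version__, torch.cuda.is_available())"
# Expected output along the lines of: 1.13.1+cu116 True
```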
14 changes: 8 additions & 6 deletions docs/quickstart/installation.md
@@ -92,8 +92,8 @@ pip install -e .[docs]
## Use docker image
Instead of installing and compiling the prerequisites, setting up the environment, and installing dependencies yourself, a ready-to-use docker image is provided.
### Prerequisites
-Docker ([get docker](https://docs.docker.com/get-docker/)) and nvidia GPU drivers ([get nvidia drivers](https://www.nvidia.de/Download/index.aspx?lang=de)), capable of working with CUDA 11.7, must be installed.
-The docker image can then either be pulled from [here](https://hub.docker.com/r/dromni/nerfstudio/tags) (replace <version> with the actual version, e.g. 0.1.10)
+Docker ([get docker](https://docs.docker.com/get-docker/)) and nvidia GPU drivers ([get nvidia drivers](https://www.nvidia.de/Download/index.aspx?lang=de)), capable of working with CUDA 11.8, must be installed.
+The docker image can then either be pulled from [here](https://hub.docker.com/r/dromni/nerfstudio/tags) (replace <version> with the actual version, e.g. 0.1.17)
```bash
docker pull dromni/nerfstudio:<version>
```
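or built locally from the repository root (a sketch of the build referenced above; the tag name `nerfstudio` is an arbitrary choice):

```bash
docker build --tag nerfstudio .
```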
@@ -110,13 +110,15 @@ docker run --gpus all \ # Give the container access to all GPUs.
-p 7007:7007 \ # Map port from local machine to docker container (required to access the web interface/UI).
--rm \ # Remove container after it is closed (recommended).
-it \ # Start container in interactive mode.
-nerfstudio # Docker image name
+dromni/nerfstudio:<tag> # Docker image name if you pulled from docker hub.
+<--- OR --->
+nerfstudio # Docker image tag if you built the image from the Dockerfile yourself using the command from above.
```
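Note that the annotated command above is for explanation only; the trailing comments after the backslashes prevent it from being pasted directly into a shell. An equivalent copy-pasteable form (paths and `<tag>` are placeholders):

```bash
docker run --gpus all \
  -v /folder/of/your/data:/workspace/ \
  -v /home/<YOUR_USER>/.cache/:/home/user/.cache/ \
  -p 7007:7007 --rm -it \
  dromni/nerfstudio:<tag>
```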
### Call nerfstudio commands directly
Alternatively, the container can be used directly by appending a nerfstudio command to the end of the call.
```bash
docker run --gpus all -v /folder/of/your/data:/workspace/ -v /home/<YOUR_USER>/.cache/:/home/user/.cache/ -p 7007:7007 --rm -it # Parameters.
-nerfstudio \ # Docker image name
+dromni/nerfstudio:<tag> \ # Docker image name
ns-process-data video --data /workspace/video.mp4 # Sample nerfstudio command.
```
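A follow-on example in the same style (a sketch; it assumes the data was already processed into the mounted `/workspace/` as above, with `nerfacto` as a typical model choice):

```bash
docker run --gpus all -v /folder/of/your/data:/workspace/ -p 7007:7007 --rm -it \
  dromni/nerfstudio:<tag> \
  ns-train nerfacto --data /workspace/   # train a model on the processed data
```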
### Note
@@ -125,10 +127,10 @@ docker run --gpus all -v /folder/of/your/data:/workspace/ -v /home/<YOUR_USER>/.cache/:/home/user/.cache/ -p 7007:7007 --rm -it # Parameters.
- Always use full paths; relative paths are known to create issues when used in mounts into docker.
- Everything inside the container that is not in a mounted folder (workspace in the above example) will be permanently removed when the container is destroyed. Always do your tasks in, and write your outputs to, the mounted workdir!
- The user inside the container is called 'user' and is mapped to the local user with ID 1000 (usually the first non-root user on Linux systems).
-- The container currently is based on nvidia/cuda:11.7.1-devel-ubuntu22.04, consequently it comes with CUDA 11.7 which must be supported by the nvidia driver. No local CUDA installation is required or will be affected by using the docker image.
+- The container is currently based on nvidia/cuda:11.8.0-devel-ubuntu22.04; consequently it comes with CUDA 11.8, which must be supported by the nvidia driver. No local CUDA installation is required or will be affected by using the docker image.
- The docker image (or rather Ubuntu 22.04) comes with Python 3.10; no older version of Python is installed.
- If you call the container with commands directly, you might still want to add the interactive terminal flag ('-it') to get live log output of the nerfstudio scripts. If the container is used in an automated environment, the flag should be omitted.
-- The current version of docker is built for multi-architecture (CUDA architectures) use. The target architecture must be defined at build time for tinyCUDNN to be able to compile properly. If your GPU architecture is not covered by the following table you need to replace the number in the line ```ENV TCNN_CUDA_ARCHITECTURES=90;89;86;80;75;70;61;52;37``` to you specific architecture. It also is a good idea to remove all architectures but yours (e.g. ```ENV TCNN_CUDA_ARCHITECTURES=86```) to speedup the docker build a lot.
+- The current version of the docker image is built for use with multiple CUDA architectures. The target architecture(s) must be defined at build time for Colmap and tiny-cuda-nn to compile properly. If your GPU architecture is not covered by the following table, replace the numbers in the line ```ARG CUDA_ARCHITECTURES=90;89;86;80;75;70;61;52;37``` with your specific architecture. It is also a good idea to remove all architectures but yours (e.g. ```ARG CUDA_ARCHITECTURES=86```) to speed up the docker build process a lot.

**Currently supported CUDA architectures in the docker image**
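Since the architectures are declared as a build `ARG`, they can also be overridden at build time without editing the Dockerfile (a sketch for a single RTX 30xx-series card, compute capability 8.6; the tag name is arbitrary):

```bash
docker build --build-arg CUDA_ARCHITECTURES=86 --tag nerfstudio .
```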
