
Commit

* Add the following fixes to documentation:

* Fix trailing backslash in “Use docker image” section
* Replace escaped backslash with double-escaped backslash (see Sphinx issue: sphinx-doc/sphinx#6730)
* Close backtick in `contributing.md`
* Add “run” to instructions for “Render Video”
* Remove unnecessary “,” in “Render Video”
* Correct “before preceding” to “before proceeding”
* Correct “viewpoinrts” to “viewpoints”
* Add missing “.” in “Process data”
* Specify “bash” for code blocks (for consistency with other blocks in the page)
* Fix code block in `viewer_quickstart.rst` (“Copy to clipboard” widget now works)
* Fix some code formatting issues in `export_geometry.md`

* Correct "harmoincs" to "harmonics" in docs.
Mason-McGough authored Jan 17, 2023
1 parent 110c478 commit fb8a57d
Showing 8 changed files with 39 additions and 39 deletions.
8 changes: 4 additions & 4 deletions README.md
@@ -162,9 +162,9 @@ Once you have a NeRF model you can either render out a video or export a point c

### Render Video

First we must create a path for the camera to follow. This can be done in the viewer under the "RENDER" tab. Orient your 3D view to the location where you wish the video to start, then press "ADD CAMERA". This will set the first camera key frame. Continue to new viewpoints adding additional cameras to create the camera path. We provide other parameters to further refine your camera path. Once satisfied, press "RENDER" which will display a modal that contains the command needed to render the video. Kill the training job (or create a new terminal if you have lots of compute) and the command to generate the video.
First we must create a path for the camera to follow. This can be done in the viewer under the "RENDER" tab. Orient your 3D view to the location where you wish the video to start, then press "ADD CAMERA". This will set the first camera key frame. Continue to new viewpoints adding additional cameras to create the camera path. We provide other parameters to further refine your camera path. Once satisfied, press "RENDER" which will display a modal that contains the command needed to render the video. Kill the training job (or create a new terminal if you have lots of compute) and run the command to generate the video.
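
For reference, a render command typically combines the trained model's config with the camera path exported from the viewer, along the lines of the sketch below. The flag names and paths here are assumptions rather than the committed docs; the authoritative command is the one shown in the viewer's modal, and `ns-render --help` lists the exact options.

```bash
# Illustrative sketch only: render a video from a camera path exported in the viewer.
# Flag names and paths are assumptions; confirm them with `ns-render --help`.
ns-render --load-config outputs/{experiment}/nerfacto/{timestamp}/config.yml \
          --traj filename \
          --camera-path-filename camera_path.json \
          --output-path renders/output.mp4
```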

Other video export options are available, learn more by running,
Other video export options are available, learn more by running

```bash
ns-render --help
@@ -174,7 +174,7 @@ ns-render --help

While NeRF models are not designed to generate point clouds, it is still possible. Navigate to the "EXPORT" tab in the 3D viewer and select "POINT CLOUD". If the crop option is selected, everything in the yellow square will be exported into a point cloud. Modify the settings as desired then run the command at the bottom of the panel in your command line.

Alternatively you can use the CLI without the viewer. Learn about the export options by running,
Alternatively you can use the CLI without the viewer. Learn about the export options by running

```bash
ns-export pointcloud --help
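
# Illustrative sketch: a concrete export call following the pattern of the mesh
# export examples elsewhere in this commit (CONFIG.yml and OUTPUT_DIR are placeholders).
ns-export pointcloud --load-config CONFIG.yml --output-dir OUTPUT_DIR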
@@ -198,7 +198,7 @@ Using an existing dataset is great, but likely you want to use your own data! We
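
A typical first step for your own captures might look like the sketch below. The `images` subcommand and its flags are assumptions patterned on the `record3d` and `metashape` commands shown elsewhere in this commit; confirm the exact options with `ns-process-data --help`.

```bash
# Illustrative sketch: convert a folder of photos into a nerfstudio dataset.
ns-process-data images --data {data directory} --output-dir {output directory}
```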

### Training models other than nerfacto

We provide other models than nerfacto, for example if you want to train the original nerf model, use the following command,
We provide other models than nerfacto, for example if you want to train the original nerf model, use the following command

```bash
ns-train vanilla-nerf --data DATA_PATH
3 changes: 2 additions & 1 deletion docs/nerfology/model_components/visualize_encoders.ipynb
@@ -415,12 +415,13 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"(spherical)=\n",
"## Spherical Harmonic Encoding\n",
"Encode direction using spherical harmoincs. (Mostly used to encode viewing direction)"
"Encode direction using spherical harmonics. (Mostly used to encode viewing direction)"
]
},
{
10 changes: 5 additions & 5 deletions docs/quickstart/custom_dataset.md
@@ -130,7 +130,7 @@ cd vcpkg

Nerfstudio can also be trained directly from captures from the [Polycam app](https://poly.cam//). This avoids the need to use COLMAP. Polycam's poses are globally optimized which make them more robust to drift (an issue with ARKit or SLAM methods).

To get the best results, try to reduce motion blur as much as possible and try to view the target from as many viewpoinrts as possible. Polycam recommends having good lighting and moving the camera slowly if using auto mode. Or, even better, use the manual shutter mode to capture less blurry images.
To get the best results, try to reduce motion blur as much as possible and try to view the target from as many viewpoints as possible. Polycam recommends having good lighting and moving the camera slowly if using auto mode. Or, even better, use the manual shutter mode to capture less blurry images.
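
Once the Polycam export is downloaded, the processing step might look like the following sketch. The `polycam` subcommand and flags are assumptions patterned on the other `ns-process-data` examples in this commit; confirm them with `ns-process-data polycam --help`.

```bash
# Illustrative sketch: convert a Polycam export into a nerfstudio dataset.
ns-process-data polycam --data {polycam export} --output-dir {output directory}
```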

:::{admonition} Note
:class: info
@@ -210,15 +210,15 @@ After downloading the app, `Developer Mode` needs to be enabled. A toggle can be

3. Tap the `+` button to create a new capture.

4. Choose `Camera pose` as the capture option
4. Choose `Camera pose` as the capture option.

5. Capture the scene and provide a name.

6. After processing is complete, export the scene. It will be sent to your email.

7. Unzip the file and run the training script (`ns-process-data` is not necessary).

```
```bash
ns-train nerfacto --data {kiri output directory}
```

@@ -249,7 +249,7 @@ ns-process-data record3d --data {data directory} --output-dir {output directory}

4. Train with nerfstudio!

```
```bash
ns-train nerfacto --data {output directory}
```

@@ -281,6 +281,6 @@ ns-process-data metashape --data {data directory} --xml {xml file} --output-dir

4. Train with nerfstudio!

```
```bash
ns-train nerfacto --data {output directory}
```
32 changes: 15 additions & 17 deletions docs/quickstart/export_geometry.md
@@ -6,33 +6,31 @@ Here we document how to export point clouds and meshes from nerfstudio. The main

### 1. TSDF Fusion

TSDF (truncated signed distance function) Fusion is a meshing algorithm that uses depth maps to extract a surface as a mesh. This method works for all models.
TSDF (truncated signed distance function) Fusion is a meshing algorithm that uses depth maps to extract a surface as a mesh. This method works for all models.

```python
ns-export tsdf --load-config CONFIG.yml --output-dir OUTPUT_DIR
```
```python
ns-export tsdf --load-config CONFIG.yml --output-dir OUTPUT_DIR
```

### 2. Poisson surface reconstruction

Poisson surface reconstruction gives the highest quality meshes. See the steps below to use Poisson surface reconstruction in our repo.
Poisson surface reconstruction gives the highest quality meshes. See the steps below to use Poisson surface reconstruction in our repo.

:::{admonition} Note
:class: info
> **Note:**
> This will only work with a Model that computes or predicts normals, e.g., nerfacto.
This will only work with a Model that computes or predicts normals, e.g., nerfacto.
:::

1. Train nerfacto with network settings that predict normals.
1. Train nerfacto with network settings that predict normals.

```bash
ns-train nerfacto --pipeline.model.predict-normals True
```
```bash
ns-train nerfacto --pipeline.model.predict-normals True
```

2. Export a mesh with the Poisson meshing algorithm.
2. Export a mesh with the Poisson meshing algorithm.

```bash
ns-export poisson --load-config CONFIG.yml --output-dir OUTPUT_DIR
```
```bash
ns-export poisson --load-config CONFIG.yml --output-dir OUTPUT_DIR
```

## Exporting a point cloud

10 changes: 5 additions & 5 deletions docs/quickstart/first_nerf.md
@@ -46,9 +46,9 @@ Once you have a NeRF model you can either render out a video or export a point c

### Render Video

First we must create a path for the camera to follow. This can be done in the viewer under the "RENDER" tab. Orient your 3D view to the location where you wish the video to start, then press "ADD CAMERA". This will set the first camera key frame. Continue to new viewpoints adding additional cameras to create the camera path. We provide other parameters to further refine your camera path. Once satisfied, press "RENDER" which will display a modal that contains the command needed to render the video. Kill the training job (or create a new terminal if you have lots of compute) and the command to generate the video.
First we must create a path for the camera to follow. This can be done in the viewer under the "RENDER" tab. Orient your 3D view to the location where you wish the video to start, then press "ADD CAMERA". This will set the first camera key frame. Continue to new viewpoints adding additional cameras to create the camera path. We provide other parameters to further refine your camera path. Once satisfied, press "RENDER" which will display a modal that contains the command needed to render the video. Kill the training job (or create a new terminal if you have lots of compute) and run the command to generate the video.

Other video export options are available, learn more by running,
Other video export options are available, learn more by running

```bash
ns-render --help
@@ -58,7 +58,7 @@ ns-render --help

While NeRF models are not designed to generate point clouds, it is still possible. Navigate to the "EXPORT" tab in the 3D viewer and select "POINT CLOUD". If the crop option is selected, everything in the yellow square will be exported into a point cloud. Modify the settings as desired then run the command at the bottom of the panel in your command line.

Alternatively you can use the CLI without the viewer. Learn about the export options by running,
Alternatively you can use the CLI without the viewer. Learn about the export options by running

```bash
ns-export pointcloud --help
@@ -70,7 +70,7 @@ Nerfstudio allows customization of training and eval configs from the CLI in a p

The most demonstrative and helpful example of the CLI structure is the difference in output between the following commands:

The following will list the supported models,
The following will list the supported models

```bash
ns-train --help
@@ -82,7 +82,7 @@ Applying `--help` after the model specification will provide the model and train
ns-train nerfacto --help
```

At the end of the command you can specify the dataparser used. By default we use the _nerfstudio-data_ dataparser. We include other dataparsers such as _Blender_, _NuScenes_, etc. For a list of dataparser-specific arguments, add `--help` to the end of the command,
At the end of the command you can specify the dataparser used. By default we use the _nerfstudio-data_ dataparser. We include other dataparsers such as _Blender_, _NuScenes_, etc. For a list of dataparser-specific arguments, add `--help` to the end of the command

```bash
ns-train nerfacto <nerfacto optional args> nerfstudio-data --help
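
# Illustrative sketch putting the pieces together: model options before the dataparser,
# dataparser options after it. `--pipeline.model.predict-normals` appears elsewhere in
# this commit; `--downscale-factor` is an assumption, so check --help for the exact name.
ns-train nerfacto --data DATA_PATH --pipeline.model.predict-normals True nerfstudio-data --downscale-factor 4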
6 changes: 3 additions & 3 deletions docs/quickstart/installation.md
@@ -6,7 +6,7 @@ CUDA must be installed on the system. This library has been tested with version

## Create environment

Nerfstudio requires `python >= 3.7`. We recommend using conda to manage dependencies. Make sure to install [Conda](https://docs.conda.io/en/latest/miniconda.html) before preceding.
Nerfstudio requires `python >= 3.7`. We recommend using conda to manage dependencies. Make sure to install [Conda](https://docs.conda.io/en/latest/miniconda.html) before proceeding.

```bash
conda create --name nerfstudio -y python=3.8
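
# Activate the new environment before installing nerfstudio (standard conda workflow).
conda activate nerfstudio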
@@ -64,7 +64,7 @@ pip install -e .[docs]
```

## Use docker image
Instead of installing and compiling prerequisites, setting up the environment and installing dependencies, a ready to use docker image is provided. \
Instead of installing and compiling prerequisites, setting up the environment and installing dependencies, a ready to use docker image is provided.
### Prerequisites
Docker ([get docker](https://docs.docker.com/get-docker/)) and nvidia GPU drivers ([get nvidia drivers](https://www.nvidia.de/Download/index.aspx?lang=de)), capable of working with CUDA 11.7, must be installed.
The docker image can then either be pulled from [here](https://hub.docker.com/r/dromni/nerfstudio/tags) (replace <version> with the actual version, e.g. 0.1.10)
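
As a sketch, pulling a tagged image could look like this; the image name comes from the Docker Hub link above and the tag is only the example version mentioned in the text.

```bash
# Illustrative: pull a specific nerfstudio image tag from Docker Hub.
docker pull dromni/nerfstudio:0.1.10
```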
@@ -95,7 +95,7 @@ docker run --gpus all -v /folder/of/your/data:/workspace/ -v /home/<YOUR_USER>/.
```
### Note
- The container works on Linux and Windows, depending on your OS some additional setup steps might be required to provide access to your GPU inside containers.
- Paths on Windows use backslash '\\' while unix based systems use a frontslash '/' for paths, where backslashes might require an escape character depending on where they are used (e.g. C:\\\\folder1\\\\folder2...). Ensure to use the correct paths when mounting folders or providing paths as parameters.
- Paths on Windows use backslash '\\\\' while unix based systems use a frontslash '/' for paths, where backslashes might require an escape character depending on where they are used (e.g. C:\\\\folder1\\\\folder2...). Ensure to use the correct paths when mounting folders or providing paths as parameters.
- Everything inside the container that is not in a mounted folder (workspace in the above example) will be permanently removed when the container is destroyed. Always keep your work and output folders inside the mounted workdir!
- The user inside the container is called 'user' and is mapped to the local user with ID 1000 (usually the first non-root user on Linux systems).
- The container currently is based on nvidia/cuda:11.7.1-devel-ubuntu22.04, consequently it comes with CUDA 11.7 which must be supported by the nvidia driver. No local CUDA installation is required or will be affected by using the docker image.
7 changes: 4 additions & 3 deletions docs/quickstart/viewer_quickstart.rst
@@ -63,9 +63,10 @@ You will need to forward the port that the viewer is running on.
For example, if you are running the viewer on port 7007, you will need to forward that port to your local machine.
You can (without needing to modify router port settings) simply do this by opening a new terminal window and sshing into the remote machine with the following command:

```bash
ssh -L 7007:localhost:7007 <username>@<remote-machine-ip>
```
.. code-block:: bash
ssh -L 7007:localhost:7007 <username>@<remote-machine-ip>
.. admonition:: Note

2 changes: 1 addition & 1 deletion docs/reference/contributing.md
@@ -72,7 +72,7 @@ python scripts/docs/build_docs.py
:class: info

- Rerun `make html` when documentation changes are made
- make clean` is necessary if the documentation structure changes.
- `make clean` is necessary if the documentation structure changes.
:::
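
A combined sketch of the two commands from the note above, assuming the Sphinx `Makefile` lives in the `docs/` directory:

```bash
# Illustrative: rebuild the documentation from scratch.
cd docs
make clean   # only needed when the documentation structure changes
make html
```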

### Auto build
