Simplify documentation #1003

Merged · 1 commit · Nov 22, 2022
143 changes: 47 additions & 96 deletions docs/quickstart/first_nerf.md
# Training your first model

## Train and run viewer

The following will train a _nerfacto_ model, our recommended model for real world scenes.

```bash
# Download some test data:
ns-download-data --dataset=nerfstudio --capture=poster
# Train model
ns-train nerfacto --data data/nerfstudio/poster
```

If everything works, you should see training progress like the following:

<p align="center">
<img width="800" alt="image" src="https://user-images.githubusercontent.com/3310961/202766069-cadfd34f-8833-4156-88b7-ad406d688fc0.png">
</p>
Navigating to the link at the end of the terminal will load the webviewer. If you are running on a remote machine, you will need to port forward the websocket port (defaults to 7007).
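As a minimal sketch with OpenSSH (`user@remote` is a placeholder for your training machine), the forward might look like:

```shell
# Forward the viewer's websocket port (7007 by default) from the remote
# training machine to your local machine. -N keeps the tunnel open
# without running a remote command.
ssh -N -L 7007:localhost:7007 user@remote
```

With the tunnel open, the viewer link printed in the remote terminal should work in your local browser.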

<p align="center">
<img width="800" alt="image" src="https://user-images.githubusercontent.com/3310961/202766653-586a0daa-466b-4140-a136-6b02f2ce2c54.png">
</p>

:::{admonition} Note
:class: note

- You may have to change the port using `--viewer.websocket-port`.
- All data configurations must go at the end. In this case, `nerfstudio-data` and all of its corresponding configurations come at the end after the model and viewer specification.
:::
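For example, if the default port is taken, a sketch of moving the viewer to another port (7008 here is arbitrary) is:

```shell
# Train as before, but serve the viewer on port 7008 instead of 7007:
ns-train nerfacto --data data/nerfstudio/poster --viewer.websocket-port 7008
```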

## Resume from checkpoint / visualize existing run

It is possible to load a pretrained model by running

```bash
ns-train nerfacto --data data/nerfstudio/poster --trainer.load_dir {outputs/.../nerfstudio_models}
```

This will automatically start training. If you do not want it to train, add `--viewer.start-train False` to your training command.
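The `load_dir` value is a run's `nerfstudio_models` folder under `outputs/`. As a sketch, assuming the default output layout, the most recent one can be found with:

```shell
# List nerfacto runs newest-first and keep the latest checkpoint folder.
# Assumes the default outputs/<experiment>/nerfacto/<timestamp>/ layout.
ls -td outputs/*/nerfacto/*/nerfstudio_models 2>/dev/null | head -n 1
```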

## Exporting Results

Once you have a NeRF model you can either render out a video or export a point cloud.

### Render Video

First we must create a path for the camera to follow. This can be done in the viewer under the "RENDER" tab. Orient your 3D view to the location where you wish the video to start, then press "ADD CAMERA". This will set the first camera key frame. Continue to new viewpoints, adding additional cameras to create the camera path. We provide other parameters to further refine your camera path. Once satisfied, press "RENDER", which will display a modal containing the command needed to render the video. Kill the training job (or create a new terminal if you have lots of compute) and run the command to generate the video.

Other video export options are available; learn more by running:

```bash
ns-render --help
```
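For instance, the spiral-trajectory render from earlier versions of these docs looked like the following; treat the exact flags as a sketch and confirm them against `ns-render --help`:

```shell
# Render a spiral camera trajectory from a trained model to an mp4.
# {PATH_TO_CONFIG} is the config.yml path logged during training.
ns-render --load-config={PATH_TO_CONFIG} --traj=spiral --output-path=renders/output.mp4
```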

### Generate Point Cloud

While NeRF models are not designed to generate point clouds, it is still possible. Navigate to the "EXPORT" tab in the 3D viewer and select "POINT CLOUD". If the crop option is selected, everything in the yellow square will be exported into a point cloud. Modify the settings as desired then run the command at the bottom of the panel in your command line.

Alternatively you can use the CLI without the viewer. Learn about the export options by running:

```bash
ns-export pointcloud --help
```
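As a hedged sketch of a headless export (the flag names below are assumptions; verify them with the `--help` output, and `{PATH_TO_CONFIG}` is your run's config path):

```shell
# Export a point cloud from a trained model without the viewer.
# --load-config / --output-dir are assumed flag names; verify with --help.
ns-export pointcloud --load-config {PATH_TO_CONFIG} --output-dir exports/pointcloud/
```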

## Intro to nerfstudio CLI and Configs

Nerfstudio allows customization of training and eval configs from the CLI in a powerful way, but there are some things to understand.

The most demonstrative and helpful example of the CLI structure is the difference in output between the following commands:

The following will list the supported models:

```bash
ns-train --help
```

Applying `--help` after the model specification will provide the model- and training-specific arguments.

```bash
ns-train nerfacto --help
```

At the end of the command you can specify the dataparser to use. By default we use the _nerfstudio-data_ dataparser. We include other dataparsers such as _Blender_ and _NuScenes_. For a list of dataparser-specific arguments, add `--help` to the end of the command:

```bash
ns-train nerfacto <nerfacto optional args> nerfstudio-data --help
```

Each script will have some other minor quirks (like the training script dataparser subcommand needing to come after the model subcommand); read up on them [here](../reference/cli/index.md).

## Tensorboard / WandB

We support three different methods to track training progress: the viewer, [tensorboard](https://www.tensorflow.org/tensorboard), and [Weights and Biases](https://wandb.ai/site). You can specify which visualizer to use by appending `--vis {viewer, tensorboard, wandb}` to the training command. Note that only one may be used at a time. Additionally, the viewer only works for methods that are fast (i.e. nerfacto, instant-ngp); for slower methods like NeRF, use the other loggers.
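For example, to log to Weights and Biases instead of the viewer:

```shell
# Same training command as the quickstart, with wandb as the tracker:
ns-train nerfacto --data data/nerfstudio/poster --vis wandb
```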

## Evaluating Runs

1 change: 1 addition & 0 deletions docs/quickstart/installation.md
```bash
pip install nerfstudio
```

**From source**
Optional: use this command if you want the latest development version.

```bash
git clone [email protected]:nerfstudio-project/nerfstudio.git
```
4 changes: 0 additions & 4 deletions docs/quickstart/viewer_quickstart.rst
Training on a local machine
---------------------------

You should be able to click the link obtained while running a script to open the viewer in your browser.
Training on a remote machine
----------------------------
