Squashed commit of the following:
commit 9808ebb
Author: Matthew Tancik <[email protected]>
Date:   Sun Feb 12 11:14:12 2023 -0800

    v0.1.17 (nerfstudio-project#1409)

commit 2b52f51
Author: Matthew Tancik <[email protected]>
Date:   Sun Feb 12 11:05:59 2023 -0800

    Downgrade nerfacc to 0.3.3 (nerfstudio-project#1408)

    reset nerfacc to 0.3.3

commit a37c73f
Author: Pablo Vela <[email protected]>
Date:   Sun Feb 12 12:31:31 2023 -0600

    sdfstudio dataparser/dataset (nerfstudio-project#1381)

    * add downloads for sdfstudio datasets

    * add sdfstudio parser to work with sdfstudio data

    * fix licensing and doc strings

    * fix linter errors

    * move depths/normals to metadata, create sdfdataset

    * remove extra scenebox parameters

    * more linting errors fix

    * fix wrong annotation

    * add in missing depth/normal files

    * minor fixes

    ---------

commit 71eacd5
Author: Matthew Tancik <[email protected]>
Date:   Sat Feb 11 10:51:22 2023 -0800

    Update reality capture doc (nerfstudio-project#1404)

commit 259f3aa
Author: Mark Colley <[email protected]>
Date:   Fri Feb 10 18:30:57 2023 -0500

    Fix IPython error display (nerfstudio-project#1397)

commit 3cf2bb0
Author: Matthew Tancik <[email protected]>
Date:   Thu Feb 9 18:26:13 2023 -0800

    Update citation (nerfstudio-project#1392)

commit 448e107
Author: Matthew Tancik <[email protected]>
Date:   Thu Feb 9 17:55:51 2023 -0800

    Update doc link

commit 6368340
Author: Jonáš Kulhánek <[email protected]>
Date:   Fri Feb 10 02:31:16 2023 +0100

    Add ns-viewer command to run viewer only with model in eval mode (nerfstudio-project#1379)

    * Add ns-viewer command to run only viewer with model in eval mode

    * Remove profiler and logger

    * Fix ns-viewer

    * ns-viewer remove writer.setup_local_writer

    * ns-viewer add writer.setup_local_writer

    * ns-viewer: update docs

    * Delete outputs

commit 27b7132
Author: Jonáš Kulhánek <[email protected]>
Date:   Fri Feb 10 02:23:33 2023 +0100

    Add support for is_fisheye flag in ingp transforms.json (nerfstudio-project#1385)

commit b653eef
Author: LSong <[email protected]>
Date:   Thu Feb 9 16:43:21 2023 -0800

    add time (nerfstudio-project#1391)

    * add time

    * fix black

    ---------

    Co-authored-by: LcDog <[email protected]>
machenmusik committed Feb 13, 2023
1 parent a12faff commit 8c68a47
Showing 18 changed files with 491 additions and 33 deletions.
14 changes: 7 additions & 7 deletions README.md
@@ -293,13 +293,13 @@ We provide the following support structures to make life easier for getting star
If you use this library or find the documentation useful for your research, please consider citing:

```
@misc{nerfstudio,
title={Nerfstudio: A Framework for Neural Radiance Field Development},
author={Matthew Tancik*, Ethan Weber*, Evonne Ng*, Ruilong Li, Brent Yi,
Terrance Wang, Alexander Kristoffersen, Jake Austin, Kamyar Salahi,
Abhik Ahuja, David McAllister, Angjoo Kanazawa},
year={2022},
url={https://github.com/nerfstudio-project/nerfstudio},
@article{nerfstudio,
author = {Tancik, Matthew and Weber, Ethan and Ng, Evonne and Li, Ruilong and Yi,
Brent and Kerr, Justin and Wang, Terrance and Kristoffersen, Alexander and Austin,
Jake and Salahi, Kamyar and Ahuja, Abhik and McAllister, David and Kanazawa, Angjoo},
title = {Nerfstudio: A Modular Framework for Neural Radiance Field Development},
journal = {arXiv preprint arXiv:2302.04264},
year = {2023},
}
```

8 changes: 4 additions & 4 deletions colab/demo.ipynb
@@ -292,9 +292,9 @@
"if os.path.exists(f\"data/nerfstudio/{scene}/transforms.json\"):\n",
" !ns-train nerfacto --viewer.websocket-port 7007 nerfstudio-data --data data/nerfstudio/$scene --downscale-factor 4\n",
"else:\n",
" import IPython\n",
" display(IPython.display.HTML('<h3 style=\"color:red\">Error: Data processing did not complete</h3>'))\n",
" display(IPython.display.HTML('<h3>Please re-run `Downloading and Processing Data`, or view the FAQ for more info.</h3>'))"
" from IPython.core.display import display, HTML\n",
" display(HTML('<h3 style=\"color:red\">Error: Data processing did not complete</h3>'))\n",
" display(HTML('<h3>Please re-run `Downloading and Processing Data`, or view the FAQ for more info.</h3>'))"
]
},
{
@@ -383,4 +383,4 @@
},
"nbformat": 4,
"nbformat_minor": 0
}
}
2 changes: 1 addition & 1 deletion docs/developer_guides/pipelines/datamanagers.md
@@ -87,7 +87,7 @@ def next_train(self, step: int) -> Tuple[RayBundle, Dict]:

You can see our code for more details.

```{button-link} https://github.com/nerfstudio-project/nerfstudio/blob/master/nerfstudio/data/datamanagers.py
```{button-link} https://github.com/nerfstudio-project/nerfstudio/blob/main/nerfstudio/data/datamanagers/base_datamanager.py
:color: primary
:outline:
See the code!
14 changes: 7 additions & 7 deletions docs/index.md
@@ -158,13 +158,13 @@ We'll be constantly growing this list! So make sure to check back in to see our
If you use this library or find the documentation useful for your research, please consider citing:

```none
@misc{nerfstudio,
title={Nerfstudio: A Framework for Neural Radiance Field Development},
author={Matthew Tancik*, Ethan Weber*, Evonne Ng*, Ruilong Li, Brent Yi,
Terrance Wang, Alexander Kristoffersen, Jake Austin, Kamyar Salahi,
Abhik Ahuja, David McAllister, Angjoo Kanazawa},
year={2022},
url={https://github.com/nerfstudio-project/nerfstudio},
@article{nerfstudio,
author = {Tancik, Matthew and Weber, Ethan and Ng, Evonne and Li, Ruilong and Yi,
Brent and Kerr, Justin and Wang, Terrance and Kristoffersen, Alexander and Austin,
Jake and Salahi, Kamyar and Ahuja, Abhik and McAllister, David and Kanazawa, Angjoo},
title = {Nerfstudio: A Modular Framework for Neural Radiance Field Development},
journal = {arXiv preprint arXiv:2302.04264},
year = {2023},
}
```

10 changes: 9 additions & 1 deletion docs/quickstart/first_nerf.md
@@ -30,7 +30,7 @@ Navigating to the link at the end of the terminal will load the webviewer. If yo
- All data configurations must go at the end. In this case, `nerfstudio-data` and all of its corresponding configurations come at the end after the model and viewer specification.
:::

## Resume from checkpoint / visualize existing run
## Resume from checkpoint

It is possible to load a pretrained model by running

@@ -40,6 +40,14 @@ ns-train nerfacto --data data/nerfstudio/poster --load-dir {outputs/.../nerfstud

This will automatically start training. If you do not want it to train, add `--viewer.start-train False` to your training command.

## Visualize existing run

Given a pretrained model checkpoint, you can start the viewer by running

```bash
ns-viewer --load-config {outputs/.../config.yml}
```

## Exporting Results

Once you have a NeRF model you can either render out a video or export a point cloud.
4 changes: 4 additions & 0 deletions nerfstudio/cameras/camera_paths.py
@@ -100,12 +100,16 @@ def get_spiral_path(
new_c2ws.append(c2wh[:3, :4])
new_c2ws = torch.stack(new_c2ws, dim=0)

times = None
if camera.times is not None:
times = torch.linspace(0, 1, steps)[:, None]
return Cameras(
fx=camera.fx[0],
fy=camera.fy[0],
cx=camera.cx[0],
cy=camera.cy[0],
camera_to_worlds=new_c2ws,
times=times,
)


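A brief sketch of what the new `times` value carries, with placeholder numbers (illustration only, not part of the diff): one normalized timestamp per spiral-path frame, evenly spaced in [0, 1], so time-conditioned models receive a valid time for every rendered camera.

```python
import torch

# Placeholder sketch: the spiral path now attaches one normalized timestamp
# per rendered frame when the source camera has `times` set.
steps = 30
times = torch.linspace(0, 1, steps)[:, None]  # shape (steps, 1)
print(times.shape)                        # torch.Size([30, 1])
print(times[0].item(), times[-1].item())  # 0.0 1.0
```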
4 changes: 2 additions & 2 deletions nerfstudio/cameras/cameras.py
@@ -230,15 +230,15 @@ def _init_get_height_width(
c_x_y: cx or cy for when h_w == None
"""
if isinstance(h_w, int):
h_w = torch.Tensor([h_w]).to(torch.int64).to(self.device)
h_w = torch.as_tensor([h_w]).to(torch.int64).to(self.device)
elif isinstance(h_w, torch.Tensor):
assert not torch.is_floating_point(h_w), f"height and width tensor must be of type int, not: {h_w.dtype}"
h_w = h_w.to(torch.int64).to(self.device)
if h_w.ndim == 0 or h_w.shape[-1] != 1:
h_w = h_w.unsqueeze(-1)
# assert torch.all(h_w == h_w.view(-1)[0]), "Batched cameras of different h, w will be allowed in the future."
elif h_w is None:
h_w = torch.Tensor((c_x_y * 2).to(torch.int64).to(self.device))
h_w = torch.as_tensor((c_x_y * 2)).to(torch.int64).to(self.device)
else:
raise ValueError("Height must be an int, tensor, or None, received: " + str(type(h_w)))
return h_w
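A short sketch of why the switch to `torch.as_tensor` matters (illustration only, not part of the diff): `torch.Tensor(...)` always allocates a new float32 tensor, so integer heights and widths pass through a float representation, while `torch.as_tensor(...)` keeps the integer dtype and reuses an existing tensor without copying.

```python
import torch

# torch.Tensor always builds float32; torch.as_tensor infers/keeps the dtype.
h = 540
print(torch.Tensor([h]).dtype)     # torch.float32
print(torch.as_tensor([h]).dtype)  # torch.int64

# When the input is already a tensor of the right dtype and device, as_tensor
# returns it as-is instead of copying.
w = torch.tensor([960], dtype=torch.int64)
print(torch.as_tensor(w) is w)     # True
```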
2 changes: 2 additions & 0 deletions nerfstudio/data/datamanagers/base_datamanager.py
@@ -49,6 +49,7 @@
from nerfstudio.data.dataparsers.phototourism_dataparser import (
PhototourismDataParserConfig,
)
from nerfstudio.data.dataparsers.sdfstudio_dataparser import SDFStudioDataParserConfig
from nerfstudio.data.datasets.base_dataset import InputDataset
from nerfstudio.data.pixel_samplers import EquirectangularPixelSampler, PixelSampler
from nerfstudio.data.utils.dataloaders import (
@@ -75,6 +76,7 @@
"dnerf-data": DNeRFDataParserConfig(),
"phototourism-data": PhototourismDataParserConfig(),
"dycheck-data": DycheckDataParserConfig(),
"sdfstudio-data": SDFStudioDataParserConfig(),
},
prefix_names=False, # Omit prefixes in subcommands themselves.
)
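Registering `SDFStudioDataParserConfig` here exposes it as the `sdfstudio-data` subcommand of the data manager's dataparser union. A hypothetical usage sketch follows; the data path is only an example, and it assumes the usual nerfstudio pattern where a config's `setup()` instantiates its `_target`:

```python
from pathlib import Path

from nerfstudio.data.dataparsers.sdfstudio_dataparser import SDFStudioDataParserConfig

# Example only: point `data` at a directory containing meta_data.json.
config = SDFStudioDataParserConfig(data=Path("data/DTU/scan65"), include_mono_prior=False)
dataparser = config.setup()                                 # builds the SDFStudio parser
outputs = dataparser.get_dataparser_outputs(split="train")  # reads meta_data.json
print(len(outputs.image_filenames), sorted(outputs.metadata))
```

From the command line, the same parser should be reachable as the trailing `sdfstudio-data` subcommand of `ns-train`, mirroring the other entries in this dictionary.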
6 changes: 5 additions & 1 deletion nerfstudio/data/dataparsers/instant_ngp_dataparser.py
@@ -123,6 +123,10 @@ def _generate_dataparser_outputs(self, split="train"):

w, h = meta["w"], meta["h"]

camera_type = CameraType.PERSPECTIVE
if meta.get("is_fisheye", False):
camera_type = CameraType.FISHEYE

cameras = Cameras(
fx=float(fl_x),
fy=float(fl_y),
@@ -132,7 +136,7 @@
height=int(h),
width=int(w),
camera_to_worlds=camera_to_world,
camera_type=CameraType.PERSPECTIVE,
camera_type=camera_type,
)

# TODO(ethan): add alpha background color
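To illustrate the opt-in behavior (placeholder metadata, not a full transforms.json): a file that sets `is_fisheye` to true is now parsed with the fisheye camera model, while files without the key keep the previous perspective default.

```python
import json

# Placeholder metadata: only the is_fisheye key is relevant to this change.
meta = json.loads('{"w": 1920, "h": 1080, "is_fisheye": true, "frames": []}')

camera_type = "FISHEYE" if meta.get("is_fisheye", False) else "PERSPECTIVE"
print(camera_type)  # FISHEYE
```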
158 changes: 158 additions & 0 deletions nerfstudio/data/dataparsers/sdfstudio_dataparser.py
@@ -0,0 +1,158 @@
# Copyright 2022 The Nerfstudio Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Datapaser for sdfstudio formatted data"""

from __future__ import annotations

from dataclasses import dataclass, field
from pathlib import Path
from typing import Type

import torch
from rich.console import Console

from nerfstudio.cameras import camera_utils
from nerfstudio.cameras.cameras import Cameras, CameraType
from nerfstudio.data.dataparsers.base_dataparser import (
DataParser,
DataParserConfig,
DataparserOutputs,
)
from nerfstudio.data.scene_box import SceneBox
from nerfstudio.utils.io import load_from_json

CONSOLE = Console()


@dataclass
class SDFStudioDataParserConfig(DataParserConfig):
"""Scene dataset parser config"""

_target: Type = field(default_factory=lambda: SDFStudio)
"""target class to instantiate"""
data: Path = Path("data/DTU/scan65")
"""Directory specifying location of data."""
include_mono_prior: bool = False
"""whether or not to load monocular depth and normal """
include_foreground_mask: bool = False
"""whether or not to load foreground mask"""
downscale_factor: int = 1
scene_scale: float = 2.0
"""
Sets the bounding cube to have edge length of this size.
The longest dimension of the Friends axis-aligned bbox will be scaled to this value.
"""
skip_every_for_val_split: int = 1
"""sub sampling validation images"""
auto_orient: bool = False


@dataclass
class SDFStudio(DataParser):
"""SDFStudio Dataset"""

config: SDFStudioDataParserConfig

def _generate_dataparser_outputs(self, split="train"): # pylint: disable=unused-argument,too-many-statements
# load meta data
meta = load_from_json(self.config.data / "meta_data.json")

indices = list(range(len(meta["frames"])))
# subsample to avoid out-of-memory for validation set
if split != "train" and self.config.skip_every_for_val_split >= 1:
indices = indices[:: self.config.skip_every_for_val_split]

image_filenames = []
depth_filenames = []
normal_filenames = []
transform = None
fx = []
fy = []
cx = []
cy = []
camera_to_worlds = []
for i, frame in enumerate(meta["frames"]):
if i not in indices:
continue

image_filename = self.config.data / frame["rgb_path"]
depth_filename = self.config.data / frame["mono_depth_path"]
normal_filename = self.config.data / frame["mono_normal_path"]

intrinsics = torch.tensor(frame["intrinsics"])
camtoworld = torch.tensor(frame["camtoworld"])

# append data
image_filenames.append(image_filename)
depth_filenames.append(depth_filename)
normal_filenames.append(normal_filename)
fx.append(intrinsics[0, 0])
fy.append(intrinsics[1, 1])
cx.append(intrinsics[0, 2])
cy.append(intrinsics[1, 2])
camera_to_worlds.append(camtoworld)

fx = torch.stack(fx)
fy = torch.stack(fy)
cx = torch.stack(cx)
cy = torch.stack(cy)
camera_to_worlds = torch.stack(camera_to_worlds)

# Convert from COLMAP's/OPENCV's camera coordinate system to nerfstudio
camera_to_worlds[:, 0:3, 1:3] *= -1

if self.config.auto_orient:
camera_to_worlds, transform = camera_utils.auto_orient_and_center_poses(
camera_to_worlds,
method="up",
center_poses=False,
)

# scene box from meta data
meta_scene_box = meta["scene_box"]
aabb = torch.tensor(meta_scene_box["aabb"], dtype=torch.float32)
scene_box = SceneBox(
aabb=aabb,
)

height, width = meta["height"], meta["width"]
cameras = Cameras(
fx=fx,
fy=fy,
cx=cx,
cy=cy,
height=height,
width=width,
camera_to_worlds=camera_to_worlds[:, :3, :4],
camera_type=CameraType.PERSPECTIVE,
)

# TODO supports downsample
# cameras.rescale_output_resolution(scaling_factor=1.0 / self.config.downscale_factor)

assert meta["has_mono_prior"] == self.config.include_mono_prior, f"no mono prior in {self.config.data}"

dataparser_outputs = DataparserOutputs(
image_filenames=image_filenames,
cameras=cameras,
scene_box=scene_box,
metadata={
"depth_filenames": depth_filenames if len(depth_filenames) > 0 else None,
"normal_filenames": normal_filenames if len(normal_filenames) > 0 else None,
"transform": transform,
"camera_to_worlds": camera_to_worlds if len(camera_to_worlds) > 0 else None,
"include_mono_prior": self.config.include_mono_prior,
},
)
return dataparser_outputs
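For readers preparing their own data, here is a sketch of the meta_data.json layout this parser reads. The field names come from the code above; every concrete value, filename, and matrix size is a placeholder rather than a specification of the sdfstudio export format.

```python
# Placeholder example mirroring the JSON keys accessed by SDFStudio above.
example_meta = {
    "height": 384,
    "width": 384,
    "has_mono_prior": False,  # must match the include_mono_prior config flag
    "scene_box": {"aabb": [[-1.0, -1.0, -1.0], [1.0, 1.0, 1.0]]},
    "frames": [
        {
            "rgb_path": "000000_rgb.png",
            "mono_depth_path": "000000_depth.npy",
            "mono_normal_path": "000000_normal.npy",
            # Only entries [0, 0], [1, 1], [0, 2], [1, 2] are read (fx, fy, cx, cy).
            "intrinsics": [[500.0, 0.0, 192.0], [0.0, 500.0, 192.0], [0.0, 0.0, 1.0]],
            # Camera-to-world matrix; the parser keeps the top 3x4 block.
            "camtoworld": [
                [1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 2.5],
                [0.0, 0.0, 0.0, 1.0],
            ],
        }
    ],
}
```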