Volumetric sampling bug/problems in Instant-NGP implementation #1099
Reconstruction, where I scaled the transforms file by 0.3333 (the ngp scale), used the nerfstudio loader, and changed the contraction type to `ContractionType.AABB`:
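For reference, a minimal sketch of that preprocessing step, assuming the standard instant-ngp `transforms.json` layout and that "scaling by 0.3333" means scaling the camera translations; the file names and `SCALE` constant are illustrative:

```python
# Hedged sketch: rescale the camera translations in a transforms.json by the
# ngp scale factor before loading it with the nerfstudio data parser.
# Assumes the standard instant-ngp layout ("frames" holding 4x4
# "transform_matrix" entries); file names here are illustrative.
import json

SCALE = 0.3333  # the ngp scale mentioned above

with open("transforms.json") as f:
    meta = json.load(f)

for frame in meta["frames"]:
    # scale only the translation column of each 4x4 camera-to-world matrix
    for row in range(3):
        frame["transform_matrix"][row][3] *= SCALE

with open("transforms_scaled.json", "w") as f:
    json.dump(meta, f, indent=2)
```

The `ContractionType` enum itself comes from nerfacc (`from nerfacc import ContractionType`), which the instant-ngp model config exposed in nerfstudio versions of that era.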
I think the crux of the issue is that instant-ngp and nerfacto have been optimized to work with real-world data, not the Blender data (see the discussion in #806). The best solution is probably to have multiple models specific to different use cases, i.e. a "blender" version of instant-ngp and nerfacto (it would use a different sampling strategy, no pose optimization, no appearance optimization, no contraction, ...). On our end the question is how to prioritize improving real-world datasets for applications vs. datasets like Blender for research benchmarks.
I would like to split this discussion into two parts:

1. Would you consider changing the instant-ngp default contraction method to the AABB method? (Asking because this is the contraction method of the original instant-ngp.)
2. I agree that if the scene is unbounded, then instant-ngp will perform poorly. However, if the scene is actually bounded, there are certain real-world scenarios where the instant-ngp method looks better than nerfacto. For example, aerial mapping drones: with nerfacto the scene is very foggy, even in "untrained areas", while instant-ngp (after the fixes I mentioned) does not have this problem and also has a better test PSNR. It would be great if nerfacto could match this, but I think the AABB contraction has advantages on bounded real-world scenes (both contraction behaviours are sketched below).
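To make the comparison concrete, here is a hedged sketch of the two contraction behaviours under discussion, assuming nerfacc-style conventions; the function names are mine, not the library's API:

```python
import torch

def contract_aabb(x: torch.Tensor, aabb: torch.Tensor) -> torch.Tensor:
    """Normalize points inside the box [aabb_min, aabb_max] to [0, 1]^3.
    Points outside the box fall outside [0, 1] and are simply rejected,
    which is why this only suits bounded scenes."""
    aabb_min, aabb_max = aabb[:3], aabb[3:]
    return (x - aabb_min) / (aabb_max - aabb_min)

def contract_unbounded_sphere(x: torch.Tensor) -> torch.Tensor:
    """Mip-NeRF 360 style contraction: identity inside the unit sphere,
    (2 - 1/|x|) * x/|x| outside, so all of R^3 maps into a radius-2 ball
    and far-away background can still be represented."""
    norm = x.norm(dim=-1, keepdim=True).clamp_min(1e-8)  # avoid divide-by-zero
    return torch.where(norm <= 1.0, x, (2.0 - 1.0 / norm) * (x / norm))
```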
I think the way to go about this is to create a new config, "instant-ngp-bounded", that contains these changes so that people can choose depending on the data they are working with.
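A hypothetical sketch of what such an entry could look like, following the pattern of nerfstudio's `method_configs` dictionary; the exact config classes and field names vary across nerfstudio versions, so treat this as illustrative only:

```python
# Hypothetical "instant-ngp-bounded" model config: same model as
# "instant-ngp", but with plain AABB contraction (matching the original
# instant-ngp on bounded scenes) instead of the unbounded default.
from nerfacc import ContractionType
from nerfstudio.models.instant_ngp import InstantNGPModelConfig

instant_ngp_bounded = InstantNGPModelConfig(
    contraction_type=ContractionType.AABB,
)
```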
Hi @tancik,
tl;dr - there is a problem in the volumetric sampler resulting in a memory crash.
The cause of this issue is the initial `sigma_fn` method in the ray marching.
I have been experimenting for a while with different configurations and methods in nerfstudio and stumbled upon a weird issue.
First of all, I am using the instant-ngp implementation (with AABB contraction) since I can compare it visually to the original instant-ngp.
I am using the Blender hotdog dataset with aabb 1 (in nerfstudio, `scene_scale` 0.5).
nerfstudio seems to crash on GPU memory because of this line:

`nerfstudio/nerfstudio/pipelines/dynamic_batch.py`, line 74 at commit `6baf31d`

`num_samples_per_batch` was very small.
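To see why a tiny `num_samples_per_batch` leads to a memory crash, here is a hedged sketch of what a dynamic-batch update of this kind does; the function name and the cap are illustrative, not the upstream code:

```python
# Sketch of a dynamic-batch update, assuming it rescales rays-per-batch so
# each batch hits a target total sample count:
#   num_rays <- num_rays * target_num_samples / num_samples_per_batch
# If the sampler rejects almost every sample, num_samples_per_batch is tiny,
# the ratio explodes, and the next batch requests far more rays than GPU
# memory can hold.
def update_num_rays(num_rays: int, target_num_samples: int, num_samples_per_batch: int) -> int:
    # illustrative cap: without it, a near-zero denominator makes the
    # requested ray count explode, which is the crash described above
    growth = target_num_samples / max(num_samples_per_batch, 1)
    return int(num_rays * min(growth, 16.0))
```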
The reason `num_samples_per_batch` was very small is that on the first training iteration the volumetric sampler rejected the samples because of `sigma_fn` (the samples had zero density, so it rejected all of them):
`nerfstudio/nerfstudio/model_components/ray_samplers.py`, line 458 at commit `e605b33`
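For context, a hedged sketch of the `sigma_fn` pattern that nerfacc-style ray marching uses, assuming density is queried at sample midpoints; `make_sigma_fn` and `density_fn` are illustrative names:

```python
import torch

def make_sigma_fn(density_fn, rays_o: torch.Tensor, rays_d: torch.Tensor):
    """Build the per-batch sigma_fn that ray marching calls to cull samples."""
    def sigma_fn(t_starts: torch.Tensor, t_ends: torch.Tensor, ray_indices: torch.Tensor) -> torch.Tensor:
        # query density at each sample's midpoint along its ray
        t_mid = (t_starts + t_ends) / 2.0
        positions = rays_o[ray_indices] + t_mid * rays_d[ray_indices]
        # if the freshly initialized field returns ~zero density everywhere,
        # every sample is culled and the sampler returns almost nothing
        return density_fn(positions)
    return sigma_fn
```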
I had 2 workarounds; workaround 1 (uniform sampler?) worked best for me, with better PSNR.
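The workaround code itself isn't shown above, so the following is only a hypothetical sketch of what a "uniform sampler" fallback could look like: sample evenly along each ray instead of relying on `sigma_fn` before the density field has learned anything.

```python
import torch

def uniform_t_samples(near: float, far: float, num_rays: int, num_samples: int) -> torch.Tensor:
    """Evenly spaced sample distances in [near, far], identical for every ray."""
    t = torch.linspace(near, far, num_samples)           # (num_samples,)
    return t.expand(num_rays, num_samples).contiguous()  # (num_rays, num_samples)
```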
BTW, the nerfacto model looked very bad on the Blender hotdog dataset (but this is a different issue).