Unable to use light field datasets other than the testscene #20
Comments
Btw, I was also wondering: is there a way to change the frame rate of the output video?
To change the framerate of the video output from the CUDA renderer, change both instances of the number "30" to whatever fps you want here (this is the ffmpeg command that creates the video) and recompile.

Unfortunately, we are currently dependent on the success of COLMAP to run our algorithm, since it requires camera poses; see my comments here. In the case of the MIT light field data, I'm guessing the baseline might be too small for COLMAP to handle. However, the exact poses should be available in the README.txt file included with the images.

But... I think these renderings might not be using a typical pinhole projection model, because the principal point may not be centered. I think they might be imaging a fixed rectangular "focal" window, but from different points on a second "aperture" plane. If that's true, they do not behave like images from a physical camera, and they would break the assumption in our code of a centered principal point...
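To illustrate why a non-centered principal point matters (this is a minimal sketch, not the repo's actual code, and all numbers are made up): in a pinhole model, the principal point (cx, cy) is where the optical axis hits the image plane, and shifting it moves every projected pixel.

```python
# Pinhole projection sketch: (X, Y, Z) in camera coordinates -> pixel (u, v).
# fx, fy are focal lengths in pixels; (cx, cy) is the principal point.
def project(point, fx, fy, cx, cy):
    X, Y, Z = point
    return (fx * X / Z + cx, fy * Y / Z + cy)

width, height = 640, 480
p = (0.5, 0.25, 2.0)  # an arbitrary 3D point in front of the camera

# Centered principal point -- the assumption the code makes:
u0, v0 = project(p, fx=500.0, fy=500.0, cx=width / 2, cy=height / 2)

# Off-center principal point, as a fixed "focal window" imaged from an
# offset position on the aperture plane would effectively have:
u1, v1 = project(p, fx=500.0, fy=500.0, cx=width / 2 + 80, cy=height / 2)

print(u0, v0)  # 445.0 302.5
print(u1, v1)  # 525.0 302.5 -- same ray, different pixel
```

The same 3D ray lands on a different pixel, so intrinsics recovered under a centered-principal-point assumption would be wrong for such images.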
OK, thanks a lot! I will use images from a physical camera instead.
Hey! I am an Artificial Intelligence student, and I am currently working on a project using NeRF together with a team of other students. We have a dataset that was sent to us by a company, and they want us to use NeRF on it. The dataset consists of pictures like the one attached; they were taken in the company's professional environment with a professional camera. The attached photo is compressed, but the whole object is in view in all images. The output is: And the colmap output: What could have gone wrong? Can we fix this in another way?
I am also facing this issue.
Hi, I have finished the installation and rendered the testscene successfully. However, when I tried to use pictures from other datasets, I just failed. The dataset I use is the MIT Synthetic Light Field Archive.
I checked the log and found that the first error occurred here:
The scene I use contains 25 pictures, but only 2 pictures (the initial pair) were registered successfully after running COLMAP. I think this is the main reason for the failure, and I was wondering why it happens. I also checked the COLMAP output; one difference is that my pictures do not contain GPS information. I attach the colmap_output here.
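One quick way to check the registration count is to parse the `images.txt` of the exported sparse model: in COLMAP's text format, lines starting with `#` are comments and each registered image occupies two data lines. Assuming that format, a minimal counter (the fake file below only mimics the layout for demonstration):

```python
import os
import tempfile

def count_registered(images_txt_path):
    """Count registered images in a COLMAP sparse-model images.txt.

    Each registered image takes two non-comment lines (pose line +
    2D-points line), so the count is half the number of data lines.
    """
    with open(images_txt_path) as f:
        data_lines = [ln for ln in f if ln.strip() and not ln.startswith("#")]
    return len(data_lines) // 2

# Tiny fake file mimicking the format (values are made up):
fake = (
    "# Image list with two lines of data per image\n"
    "1 0.99 0.01 0.0 0.0 0.1 0.2 0.3 1 img_000.png\n"
    "10.0 20.0 5 30.0 40.0 -1\n"
    "2 0.98 0.02 0.0 0.0 0.2 0.1 0.3 1 img_001.png\n"
    "11.0 21.0 6 31.0 41.0 -1\n"
)
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(fake)
    path = f.name

n = count_registered(path)
os.remove(path)
print(n)  # 2
```

If this reports far fewer images than you supplied (here 2 of 25), COLMAP failed to register most of the set, matching the symptom described above.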
car_colmap_output.txt