
Unable to use light field datasets other than the testscene #20

Closed
CriusT opened this issue Nov 4, 2019 · 5 comments

Comments

CriusT commented Nov 4, 2019

Hi, I have finished the installation and rendered the testscene successfully. However, when I tried to use pictures from other datasets, it failed. The dataset I use is the MIT Synthetic Light Field Archive.
I checked the log and found that the first error occurred here:

Need to run COLMAP
Features extracted
Features matched
Sparse map created
Finished running COLMAP, see data/carscene/output_5x5m/colmap_output.txt for logs
Post-colmap
('Cameras', 5)
('Images #', 2)
Traceback (most recent call last):
  File "imgs2poses.py", line 11, in <module>
    gen_poses(args.scenedir)
  File "/host/data2/l00362246/boyutian/LLFF/llff/poses/pose_utils.py", line 273, in gen_poses
    save_poses(basedir, poses, pts3d, perm)
  File "/host/data2/l00362246/boyutian/LLFF/llff/poses/pose_utils.py", line 63, in save_poses
    cams[ind-1] = 1
IndexError: list assignment index out of range

The scene I use contains 25 pictures, but only 2 pictures (the initial pair) have been registered successfully after running COLMAP. I think this is the main reason for the failure. I was wondering why that happens. Also, I checked the colmap output. One of the differences is that the pictures I use do not contain GPS information. I attach the colmap_output here.
car_colmap_output.txt
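For context on the traceback: the IndexError is consistent with partial registration. In `save_poses`, the visibility list is sized by the number of registered images, but it is indexed by the original COLMAP image id, so any id larger than the registered count overruns the list. A minimal sketch of that failure mode (the function name is illustrative, not the actual LLFF code):

```python
def mark_visible_cams(num_registered, image_ids):
    """Mimic the cams[ind-1] = 1 pattern from save_poses: the list is
    sized by registered images, but image_ids can reference any of the
    original input images."""
    cams = [0] * num_registered
    for ind in image_ids:
        cams[ind - 1] = 1  # IndexError when ind > num_registered
    return cams
```

With only 2 of 25 images registered, a 3D point seen by image 25 triggers exactly this out-of-range assignment.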

CriusT commented Nov 4, 2019

Btw, I was also wondering whether there is a way to change the frame rate of the output video.

bmild (Collaborator) commented Nov 4, 2019

To change the framerate output from the CUDA renderer, change both instances of the number "30" to whatever fps you want here (this is the ffmpeg command that creates the video) and recompile.
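As a sketch (the exact flags in the renderer's source may differ), the video is assembled by an ffmpeg invocation of roughly this shape, with the framerate appearing twice, which is why both "30"s must be edited together:

```python
def ffmpeg_video_cmd(fps, pattern="frame%03d.png", out="out.mp4"):
    # Both occurrences of the framerate must change together:
    # -framerate sets how the input frames are read, -r the output rate.
    return (f"ffmpeg -y -framerate {fps} -i {pattern} "
            f"-c:v libx264 -pix_fmt yuv420p -r {fps} {out}")
```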

Unfortunately we are currently dependent on the success of COLMAP to run our algorithm since it requires camera poses, see my comments here.

In the case of the MIT light field data, I'm guessing the baseline might be too small for COLMAP to handle. However, the exact poses should be available in the README.txt file included with the images. But... I think these renderings might not use a typical pinhole projection model, because the principal point may not be centered -- I think they might be imaging a fixed rectangular "focal" window from different points on a second "aperture" plane. If true, they do not act like images from a physical camera and would break the assumption in our code of a centered principal point...
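To illustrate the assumption being discussed (a minimal sketch, not LLFF's actual code): in a pinhole intrinsics matrix K, the principal point (cx, cy) is where the optical axis meets the image plane. LLFF assumes it sits at the image center; a shifted "focal window" effectively moves it off-center:

```python
import numpy as np

def intrinsics(f, cx, cy):
    # Pinhole intrinsics; (cx, cy) is the principal point in pixels.
    return np.array([[f,   0.0, cx],
                     [0.0, f,   cy],
                     [0.0, 0.0, 1.0]])

def project(K, X):
    # Project a 3D point in camera coordinates to pixel coordinates.
    x = K @ X
    return x[:2] / x[2]
```

A point on the optical axis projects exactly to (cx, cy), so an off-center principal point shifts the whole image in a way that a centered-principal-point model cannot explain.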

CriusT commented Nov 5, 2019

OK. Thanks a lot! I will use images from a physical camera instead.

CriusT closed this as completed Nov 5, 2019
dilaratank commented Jun 10, 2020

Hey!

I am an Artificial Intelligence student currently working on a project using NeRF, together with a team of other students. A company sent us a dataset and wants us to apply NeRF to it.

The dataset consists of pictures like the one attached, taken in the company's professional environment with a professional camera. The attached photo is compressed, but the whole object is in sight in all images.

The output is:
Features extracted
Features matched
Sparse map created
Finished running COLMAP, see /content/drive/My Drive/nerf/kan_new/colmap_output.txt for logs
Post-colmap
Traceback (most recent call last):
  File "imgs2poses.py", line 18, in <module>
    gen_poses(args.scenedir, args.match_type)
  File "/content/LLFF/llff/poses/pose_utils.py", line 274, in gen_poses
    poses, pts3d, perm = load_colmap_data(basedir)
  File "/content/LLFF/llff/poses/pose_utils.py", line 14, in load_colmap_data
    camdata = read_model.read_cameras_binary(camerasfile)
  File "/content/LLFF/llff/poses/colmap_read_model.py", line 115, in read_cameras_binary
    with open(path_to_model_file, "rb") as fid:
FileNotFoundError: [Errno 2] No such file or directory: '/content/drive/My Drive/nerf/kan_new/sparse/0/cameras.bin'
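For context: the missing cameras.bin means COLMAP never wrote a sparse model under sparse/0 (reconstruction failed entirely), so load_colmap_data has nothing to read. A hedged pre-flight check, with the path layout taken from the traceback (the helper name is illustrative):

```python
import os

def colmap_model_exists(scenedir):
    # imgs2poses expects COLMAP's binary model under <scenedir>/sparse/0.
    model_dir = os.path.join(scenedir, "sparse", "0")
    needed = ("cameras.bin", "images.bin", "points3D.bin")
    return all(os.path.isfile(os.path.join(model_dir, f)) for f in needed)
```

If this returns False after imgs2poses.py runs COLMAP, the place to look is colmap_output.txt rather than the Python side.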


And the colmap output:
colmap_output(1).txt

What could have gone wrong? Can we fix this in another way?

ibtehajali67 commented

I am also facing this issue.
