
"hloc" is hard to setup in docker image dromni/nerfstudio:0.1.17 #1412

Closed
edwardchenomni opened this issue Feb 13, 2023 · 9 comments

Comments

@edwardchenomni

edwardchenomni commented Feb 13, 2023

Describe the bug
While the instructions for manually installing hloc (required for --sfm-tool hloc in the ns-process-data command) are clearly documented, the manual installation is hard to carry out inside the docker image dromni/nerfstudio:0.1.17.

To Reproduce

  1. Run the container: docker run --gpus all --rm -it dromni/nerfstudio:0.1.17
  2. Follow the installation instructions for Hierarchical-Localization:
    git clone --recursive https://github.com/cvg/Hierarchical-Localization/
    cd Hierarchical-Localization/
  3. Install hloc: python3 -m pip install -e .

Expected behavior
Instead of installing smoothly, an error pops up:
ERROR: Could not find a version that satisfies the requirement pycolmap>=0.3.0 (from hloc) (from versions: none)
ERROR: No matching distribution found for pycolmap>=0.3.0

Screenshots
[Screenshot of the pip error output shown above]

It seems like there is no pycolmap candidate for Python 3.10?
Is there an easy fix for this?

Best,
Edward Chen

@pablovela5620
Contributor

Looks like they just recently added support for 3.10 here. It may be worth building from source in the Dockerfile, since it looks like that's needed for GPU support.

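A minimal sketch of that build-from-source route, written as lines one might add to the Dockerfile (the pycolmap repository URL is upstream's; the assumption is that COLMAP and the usual build tooling are already present in the dromni/nerfstudio base image):

    # Hypothetical Dockerfile snippet: build pycolmap from source instead of a PyPI wheel.
    # Assumes COLMAP and its development headers are already installed in the image.
    RUN git clone --recursive https://github.com/colmap/pycolmap.git /opt/pycolmap && \
        python3 -m pip install --no-cache-dir /opt/pycolmap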

@Zunhammer
Contributor

Regarding the webpage, it is not yet available for Python 3.10, and I would avoid changing the Python version for the whole docker image. As pablovela5620 mentioned, the best way would probably be to build it from source into the image. If you need help doing this, feel free to contact me directly (NicZer0 on the nerfstudio Discord), or if multiple people need this I could also add it to the dromni/nerfstudio image.

@Zunhammer
Contributor

@edwardchenomni I added hloc to the 0.1.18 image (dromni/nerfstudio:0.1.18). For me it seems to work fine; I would appreciate it if you could test and verify that it works, and close the issue if so.
Best,
Nico

@dignakov

Not the OP, but I was trying to get hloc to work as well when I came across this issue.

If you'd prefer I post this in a separate issue please let me know and I'll move it. I put this here since I was trying out the Docker container.

I seem to be encountering a problem when I run ns-process-data with this command:

$ ns-process-data video --num-frames-target 10 --sfm-tool hloc --data ./input/video.mp4 --output-dir ./ht1

I get an error:

File "/home/user/nerfstudio/nerfstudio/process_data/hloc_utils.py", line 99, in run_hloc
    reconstruction.main(
TypeError: main() got an unexpected keyword argument 'image_options'

Full terminal output:

$ ns-process-data video --num-frames-target 10 --sfm-tool hloc --data ./input/video.mp4 --output-dir ./ht1
Number of frames in video: 1981
[04:09:43] 🎉 Done converting video to images.                                                 process_data_utils.py:146
[04:09:46] 🎉 Done downscaling images.                                                         process_data_utils.py:303
[2023/02/22 04:09:46 hloc INFO] Extracting local features with configuration:
{'model': {'max_keypoints': 4096, 'name': 'superpoint', 'nms_radius': 3},
 'output': 'feats-superpoint-n4096-r1024',
 'preprocessing': {'grayscale': True, 'resize_max': 1024}}
[2023/02/22 04:09:46 hloc INFO] Skipping the extraction.
[2023/02/22 04:09:46 hloc INFO] Extracting local features with configuration:
{'model': {'name': 'netvlad'},
 'output': 'global-feats-netvlad',
 'preprocessing': {'resize_max': 1024}}
[2023/02/22 04:09:46 hloc INFO] Found 11 images in root ht1/images.
[2023/02/22 04:09:46 hloc INFO] Skipping the extraction.
[2023/02/22 04:09:46 hloc INFO] Extracting image pairs from a retrieval database.
[2023/02/22 04:09:47 hloc INFO] Found 110 pairs.
[2023/02/22 04:09:47 hloc INFO] Matching local features with configuration:
{'model': {'name': 'superglue',
           'sinkhorn_iterations': 50,
           'weights': 'outdoor'},
 'output': 'matches-superglue'}
Loaded SuperGlue model ("outdoor" weights)
100%|██████| 110/110 [00:00<00:00, 1195267.98it/s]
[2023/02/22 04:09:48 hloc INFO] Finished exporting matches.
Traceback (most recent call last):
  File "/home/user/.local/bin/ns-process-data", line 8, in <module>
    sys.exit(entrypoint())
  File "/home/user/nerfstudio/scripts/process_data.py", line 835, in entrypoint
    tyro.cli(Commands).main()
  File "/home/user/nerfstudio/scripts/process_data.py", line 306, in main
    hloc_utils.run_hloc(
  File "/home/user/nerfstudio/nerfstudio/process_data/hloc_utils.py", line 99, in run_hloc
    reconstruction.main(
TypeError: main() got an unexpected keyword argument 'image_options'

@Zunhammer
Contributor

I checked this one, and the issue is that the "image_options" parameter is not yet implemented in hloc release v1.3 (which is the most recent release and the one I used inside the docker image). So it is a docker-related issue :)
I just fixed it in dromni/nerfstudio:0.1.18. @dignakov, please pull that image again to update, try hloc one more time, and give feedback on whether it is solved. Thanks.

Just for completeness: the solution is quite easy, we need to use the main/master branch of hloc instead. Usually I prefer to stick to releases to have a defined environment, but in this case it seems the authors do not create releases frequently; the last one is from January 2022. The docker image is now on the master branch of hloc.
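
A sketch of what that change amounts to inside the Dockerfile (the /opt/hloc path is only an example location):

    # Hypothetical snippet: install hloc from the master branch rather than the v1.3 release,
    # since v1.3 predates the image_options argument that nerfstudio's hloc_utils passes.
    RUN git clone --recursive https://github.com/cvg/Hierarchical-Localization.git /opt/hloc && \
        python3 -m pip install --no-cache-dir -e /opt/hloc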

@edwardchenomni
Author

@Zunhammer thank you for the effort!
As a workaround back then, I just altered the Dockerfile to use Python 3.8 and ran pip install colmap (version 3.4), and that worked fine. So maybe I'll test 0.1.18 later.

@Zunhammer
Contributor

If everything is working in 0.1.18 I'll also add it to the Dockerfile afterwards. Would really appreciate your feedback on testing :)
Thanks

@dignakov

Had a chance to test. It works well, with one quirk...

It looks like you need to either pass --ipc=host or increase the container's shared memory with --shm-size. Maybe this would be worth mentioning in the docs?

Running the container with --ipc=host works great.

Otherwise it throws an error:

ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).

(PyTorch docs: https://github.com/pytorch/pytorch#docker-image)
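
For reference, the run command from the original report with either workaround applied (the 16gb value is only an illustrative size, not a measured requirement):

    # Share the host's IPC namespace (what worked above) ...
    docker run --gpus all --rm -it --ipc=host dromni/nerfstudio:0.1.18
    # ... or enlarge the container's /dev/shm explicitly.
    docker run --gpus all --rm -it --shm-size=16gb dromni/nerfstudio:0.1.18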

@Zunhammer
Contributor

Great, thanks for the feedback and glad that it works.

Personally, I did not experience those issues, but I see that it makes sense to add a hint about either --ipc or --shm-size. I'll do so when I add hloc to the Dockerfile in the next PR.
