VRAM usage and mesh generation #11

Open
FilipAnjou opened this issue Feb 7, 2024 · 2 comments

@FilipAnjou

Hi, thanks for developing a fascinating algorithm!

I have just tried running it on an NVIDIA RTX 4070 with 12 GB of VRAM and get a CUDA out-of-memory error when running bash script/fit_people_30s.sh (it attempts to allocate ~13 GB). Are there any simple ways to reduce VRAM usage during fitting of the model? For example, which parameters in profiles/people/people_30s.yaml have the largest effect on memory?
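
In case it helps narrow this down, below is the kind of generic PyTorch-level check I can run on my side. This assumes the fitting code is plain PyTorch; the report_vram helper and the allocator setting are general PyTorch options, not anything specific to this repo.

```python
# Generic VRAM probe, assuming the fitting pipeline is built on PyTorch.
import os
import torch

# Allocator hint available in recent PyTorch releases; it must be set before
# the first CUDA allocation and can reduce fragmentation-related OOM errors.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

def report_vram(tag: str) -> None:
    """Print current and peak allocated VRAM (in GiB) for the default GPU."""
    allocated = torch.cuda.memory_allocated() / 2**30
    peak = torch.cuda.max_memory_allocated() / 2**30
    print(f"[{tag}] allocated={allocated:.2f} GiB, peak={peak:.2f} GiB")
```

The allocator hint alone occasionally gets a run under the limit when the overshoot is small, but I suspect a config-level knob (resolution, batch size, number of points) would be more reliable, hence the question above.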

Additionally, what would be the preferred way of generating a textured mesh from the learned avatar?
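
To be concrete about what would already be useful: something along the lines of the generic Open3D reconstruction below, assuming the learned avatar can be exported as a colored point cloud. The file names are placeholders, not outputs of this repo.

```python
# Generic mesh extraction from a colored point cloud with Open3D.
# This is not this repo's API; it assumes the avatar can be dumped as
# per-point positions and RGB colors (the .npy file names are placeholders).
import numpy as np
import open3d as o3d

points = np.load("avatar_points.npy")   # (N, 3) positions
colors = np.load("avatar_colors.npy")   # (N, 3) RGB values in [0, 1]

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
pcd.colors = o3d.utility.Vector3dVector(colors)
pcd.estimate_normals()

# Poisson reconstruction gives a watertight surface; Open3D interpolates the
# point colors onto the mesh vertices.
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("avatar_mesh.ply", mesh)
```

If there is a recommended export path built into the codebase, I would of course rather use that.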

Thanks,
Filip

@yhd-ai

yhd-ai commented Apr 27, 2024

I have a similar question about generating meshes.

@fantasy-fish

fantasy-fish commented Aug 26, 2024

Same question here about generating the mesh (SMPL-X model).
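
For the bare SMPL-X geometry, a sketch like the one below with the smplx package (paths and parameters are placeholders) already gives a mesh, but it drops the learned texture, which is really what I am after.

```python
# Plain SMPL-X body mesh export with the `smplx` package; the model path and
# shape parameters are placeholders, and no learned texture is carried over.
import smplx
import torch
import trimesh

model = smplx.create("path/to/smplx/models", model_type="smplx", gender="neutral")
output = model(betas=torch.zeros(1, 10), return_verts=True)

verts = output.vertices[0].detach().cpu().numpy()
faces = model.faces  # (F, 3) triangle indices

trimesh.Trimesh(vertices=verts, faces=faces, process=False).export("smplx_body.obj")
```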
