Hi, thanks for developing a fascinating algorithm!
I just tried running it on an NVIDIA RTX 4070 with 12 GB of VRAM and got a CUDA out-of-memory error when running bash script/fit_people_30s.sh (it attempts to allocate ~13 GB). Are there any simple ways to reduce VRAM usage while fitting the model? For example, which parameters in profiles/people/people_30s.yaml have the largest effect on memory?
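For reference, something like the snippet below is how I would try to pin down where the allocation peaks; run_fitting is only a placeholder for whatever entry point the script actually calls:

```python
import torch

# Placeholder for the real fitting entry point invoked by
# script/fit_people_30s.sh -- I don't know the actual function name.
from my_fitting_module import run_fitting  # hypothetical import

torch.cuda.reset_peak_memory_stats()
try:
    run_fitting(config="profiles/people/people_30s.yaml")
except RuntimeError:
    # CUDA OOM surfaces as a RuntimeError subclass; swallow it so the
    # peak statistics below still get printed.
    pass

print(f"peak allocated: {torch.cuda.max_memory_allocated() / 1024 ** 3:.2f} GiB")
print(torch.cuda.memory_summary(abbreviated=True))
```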
Additionally, what would be the preferred way of generating a textured mesh from the learned avatar?
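In case it clarifies what I mean by a textured mesh, something along these lines would already be enough for my use case. This is only a rough sketch: avatar.sdf and avatar.color are placeholders for however the learned model actually exposes geometry and appearance, and it bakes per-vertex colours rather than a proper UV texture.

```python
import numpy as np
import torch
import trimesh
from skimage import measure


def export_textured_mesh(avatar, resolution=256, bound=1.0, path="avatar.ply"):
    # Sample the implicit surface on a dense grid inside [-bound, bound]^3.
    xs = np.linspace(-bound, bound, resolution, dtype=np.float32)
    grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1).reshape(-1, 3)

    with torch.no_grad():
        pts = torch.from_numpy(grid).cuda()
        # Query in chunks to keep VRAM usage low.
        sdf = torch.cat([avatar.sdf(chunk) for chunk in pts.split(262144)])
    sdf = sdf.reshape(resolution, resolution, resolution).cpu().numpy()

    # Marching cubes at the zero level set, then map back to world coordinates.
    verts, faces, _, _ = measure.marching_cubes(sdf, level=0.0)
    verts = verts / (resolution - 1) * 2 * bound - bound

    # Per-vertex colour from the appearance network (placeholder call).
    with torch.no_grad():
        colors = avatar.color(torch.from_numpy(verts.astype(np.float32)).cuda())
    colors = (colors.cpu().numpy().clip(0, 1) * 255).astype(np.uint8)

    mesh = trimesh.Trimesh(verts, faces, vertex_colors=colors)
    mesh.export(path)  # PLY keeps the vertex colours
    return mesh
```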
Thanks,
Filip