CUDA: Out of memory issue on RTX 3090 (24GB VRAM) #3
Comments
Oh, I think it's running with global packages, even though I've followed all the steps in the README page.
How about using 8bit Adam and xformers? They can reduce VRAM usage.
Oh, with the settings above, I enabled 8bit Adam and xformers and finally managed to run the v2 768 model successfully. I ran Windows PowerShell as administrator, and then I could run it with a batch size of 1.
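For reference, here is a hypothetical invocation combining the VRAM-saving options mentioned in this thread. The flag names are assumed from kohya-ss sd-scripts, and the script name, data directory, and model ID are placeholders, not taken from the reporter's actual command:

```shell
# Sketch only: flag names assumed from kohya-ss sd-scripts; paths and model are placeholders.
CMD="accelerate launch train_db.py \
  --pretrained_model_name_or_path=stabilityai/stable-diffusion-2 \
  --train_data_dir=train_person \
  --resolution=768,768 \
  --train_batch_size=1 \
  --use_8bit_adam \
  --xformers \
  --gradient_checkpointing"
# Print the assembled command instead of running it here.
echo "$CMD"
```

Gradient checkpointing is included as one more common VRAM lever; it trades compute time for memory, which may matter at 768x768 on a 24GB card.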
Hi,
I successfully managed to run your repo with the SD 1.5 model.
Now I'm trying to run the SD 2.0 768 model, but I get a CUDA out of memory error.
I have 23 training images (768*768) in the 20_person folder under the train_person folder.
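The folder naming here looks like the kohya-ss convention where the numeric prefix on the concept folder is the per-image repeat count (this is my assumption, not stated in the issue). A minimal sketch of that assumed layout:

```shell
# Assumed layout: <repeats>_<concept> under the training root directory.
mkdir -p train_person/20_person
# The 23 images at 768x768 would go inside train_person/20_person/.
ls train_person
```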
I tried lowering the batch size and disabling cache latents (setting it to 0).
Here are the settings I put into PowerShell with venv (virtual environment):
And here's the error message I got: