
Unable to Train Model on RTX 4090 Due to Out of Memory Issue #48

Open
ZHT150798 opened this issue Nov 13, 2024 · 1 comment

Comments

@ZHT150798

Hi, thank you for sharing this amazing model! I’m facing an issue with training the model on a single RTX 4090 GPU. Even with the batch size set to 1, the 24GB VRAM is insufficient to train the model, and I’m encountering CUDA memory errors. Are there any suggestions or optimizations you recommend to reduce memory usage or make the model trainable on a single 4090 GPU?

@LinPeiMing

Marking this thread to follow for updates. [Translated from Chinese: "插眼"]

2 participants