CUDA out of memory issue when using with pretrained weight from COOP #9

Open

SeonghaEom opened this issue Jun 13, 2023 · 0 comments
SeonghaEom commented Jun 13, 2023

Hi, I was reproducing TPT with pretrained weights loaded from CoOp.

I noticed that the current code loads the pretrained checkpoint directly. Because the checkpoint was saved from GPU index 0, the tensors are mapped back onto global GPU index 0, which is not the device I want to use.

I think the pretrained context should instead be loaded onto 'cpu' and the weights copied from there to the target device.

This frees the GPU memory on index 0 that would otherwise be held by the pretrained weights.
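A minimal sketch of the fix I have in mind. The checkpoint path, the 'state_dict'/'ctx' keys, and the model.prompt_learner.ctx attribute are assumptions based on CoOp's usual checkpoint layout, not taken verbatim from this repository:

```python
import torch

# Hypothetical checkpoint path; adjust to the actual CoOp weights file.
ckpt_path = "coop_checkpoint.pth.tar"

# map_location='cpu' keeps the saved tensors on the CPU. Without it,
# torch.load restores each tensor to the GPU it was serialized from
# (GPU index 0 in this case), allocating memory there.
checkpoint = torch.load(ckpt_path, map_location="cpu")

# Assumed CoOp-style layout: {'state_dict': {'ctx': <context tensor>}}.
pretrained_ctx = checkpoint["state_dict"]["ctx"]

# Copy the CPU tensor into the model's context parameter in place.
# copy_() moves the values onto whatever device that parameter already
# lives on, so nothing is ever allocated on GPU 0.
with torch.no_grad():
    model.prompt_learner.ctx.copy_(pretrained_ctx)
```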

SeonghaEom changed the title from "CUDA device error when using with pretrained weight from COOP" to "CUDA out of memory issue when using with pretrained weight from COOP" on Jun 13, 2023