GPU memory #5
Hi,
Thank you for your amazing work!
I am trying to replicate your results and am training with
python translation.py --base configs/translation/sbert-to-biggan256.yaml -t --gpus 0,
I was wondering which GPU you used to train your model and what batch size you used. I am only able to fit batch_size=2 on a TITAN Xp; the default batch_size in the config is 16, but I am not able to launch it on 4 TITAN Xps without running into memory issues. Are BigGAN and the Sentence Transformer fine-tuned during training (from your paper it seems they are not)? Do you have any insight into what I am missing?
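For reference, this is roughly what I am doing; the config excerpt is my assumption of how the YAML is laid out (the real nesting may differ), and the multi-GPU command simply extends the GPU list of the command above:

# assumed excerpt of configs/translation/sbert-to-biggan256.yaml (actual key layout may differ)
data:
  params:
    batch_size: 2   # default is 16; 2 is the most I can fit on a single TITAN Xp

# assumed multi-GPU launch, if --gpus accepts a comma-separated list as in PyTorch Lightning
python translation.py --base configs/translation/sbert-to-biggan256.yaml -t --gpus 0,1,2,3,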
Thank you in advance
Comments
Also, when running the command above, it seems like the model weights are not loaded correctly; I get |