torch AMP for DINOv2 precision issue #71

Closed
lnexenl opened this issue Aug 8, 2024 · 2 comments

Comments

@lnexenl
Contributor

lnexenl commented Aug 8, 2024

torch.amp is commonly used when running models, and it clearly decreases VRAM usage and increases speed. The DINOv2 weights are frozen in RoMa, and the output of the float16 version of DINOv2 might be slightly different from the float32 version. I am wondering: have you ever tried training DINOv2 together with the rest of the model?
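
For reference, here is a minimal sketch of the kind of comparison I have in mind (the hub model name, input size, and output key are my assumptions for illustration, not RoMa's actual loading code):

```python
import torch

# Load a frozen DINOv2 backbone (hub entry point assumed for illustration).
dinov2 = torch.hub.load("facebookresearch/dinov2", "dinov2_vitl14").cuda().eval()
for p in dinov2.parameters():
    p.requires_grad_(False)

# Dummy input; 518 = 37 * 14, a multiple of the ViT-L/14 patch size.
x = torch.randn(1, 3, 518, 518, device="cuda")

with torch.no_grad():
    feats_fp32 = dinov2.forward_features(x)["x_norm_patchtokens"]
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        feats_fp16 = dinov2.forward_features(x)["x_norm_patchtokens"]

# The discrepancy is usually small but non-zero, even though the weights are frozen.
print((feats_fp32 - feats_fp16.float()).abs().max())
```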

@Parskatt
Owner

Parskatt commented Aug 8, 2024

I have not. LoRA would probably be the easiest way to go. RoMa is already quite expensive to train, so I didn't look into this. You could also try just unfreezing some layers near the end.
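
Something like this rough sketch for unfreezing only the last blocks (the `.blocks` attribute name is an assumption based on the DINOv2 ViT implementation; adjust to the actual module structure):

```python
import torch

def unfreeze_last_blocks(dinov2: torch.nn.Module, num_blocks: int = 2) -> torch.nn.Module:
    # Freeze everything first, then re-enable gradients for the last few
    # transformer blocks only.
    for p in dinov2.parameters():
        p.requires_grad_(False)
    for block in dinov2.blocks[-num_blocks:]:
        for p in block.parameters():
            p.requires_grad_(True)
    return dinov2
```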

@lnexenl
Contributor Author

lnexenl commented Aug 8, 2024

I also think it would take a lot of GPU resources to train, but I don't have cards with more than 32 GB of VRAM, so I decided to keep it as-is. 😆

@lnexenl lnexenl closed this as completed Aug 8, 2024