torch.amp is used throughout the model, and it clearly reduces VRAM usage and increases speed. The DINOv2 weights are frozen in RoMa, and the output of the float16 version of DINOv2 can differ slightly from the float32 version. I am wondering whether you have ever tried training DINOv2 together with the rest of the model?
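For context, here is a minimal sketch of comparing the frozen encoder's output under AMP versus full precision. The hub model name and the `forward_features` output key follow the public DINOv2 repo; whether RoMa uses exactly this variant and entry point is an assumption here.

```python
import torch

# Illustrative only: a DINOv2 backbone loaded from torch.hub; the variant is an assumption.
dinov2 = torch.hub.load("facebookresearch/dinov2", "dinov2_vitl14")
dinov2 = dinov2.cuda().eval().requires_grad_(False)

# Input resolution must be a multiple of the 14-pixel patch size.
images = torch.randn(1, 3, 560, 560, device="cuda")

# Under autocast the matmuls run in float16, so the features can
# differ slightly from a pure float32 forward pass.
with torch.no_grad(), torch.autocast("cuda", dtype=torch.float16):
    feats_fp16 = dinov2.forward_features(images)["x_norm_patchtokens"]

with torch.no_grad():
    feats_fp32 = dinov2.forward_features(images)["x_norm_patchtokens"]

print((feats_fp16.float() - feats_fp32).abs().max())
```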
I have not. Probably LoRA would be the easiest way to go. RoMa is already quite expensive to train, so I didn't look into this. You could also try just unfreezing some layers near the end.
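For anyone wanting to try the second suggestion, a rough sketch of unfreezing only the last transformer blocks is below. The `.blocks` attribute follows the DINOv2 ViT implementation; how RoMa wraps the encoder, and the learning rates shown, are placeholders rather than the actual training setup. The same parameter grouping would apply if you instead wrapped the backbone with LoRA adapters.

```python
import itertools
import torch

# Illustrative only: a DINOv2 backbone loaded from torch.hub; the variant is an assumption.
dinov2 = torch.hub.load("facebookresearch/dinov2", "dinov2_vitl14")

# Freeze everything, then unfreeze only the last few transformer blocks.
dinov2.requires_grad_(False)
num_unfrozen = 2  # how many of the final blocks to fine-tune
for block in dinov2.blocks[-num_unfrozen:]:
    block.requires_grad_(True)

# Give the unfrozen backbone layers a smaller learning rate than the rest
# of the matcher (values here are placeholders, not RoMa's schedule).
backbone_params = itertools.chain(
    *(b.parameters() for b in dinov2.blocks[-num_unfrozen:])
)
optimizer = torch.optim.AdamW(
    [{"params": backbone_params, "lr": 1e-6}],
    lr=1e-4,
)
```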