I have some questions about three training stages.
I see that you train the novel classes together with the base classes. However, the base classes have far more examples than the novel classes, so the low-shot training suffers from severe label imbalance. Why does it still work?
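(For context, a common mitigation for this kind of imbalance, not necessarily what this repo does, is to oversample the few novel shots so each batch is roughly class-balanced. A minimal sketch with PyTorch's `WeightedRandomSampler`; the dataset and class split below are hypothetical:)

```python
import torch
from torch.utils.data import WeightedRandomSampler, DataLoader, TensorDataset

# Hypothetical joint dataset: many base examples, very few novel shots.
features = torch.randn(1000 + 5, 512)                    # 1000 base + 5 novel feature vectors
labels = torch.cat([torch.zeros(1000), torch.ones(5)]).long()
dataset = TensorDataset(features, labels)

# Weight each example by the inverse frequency of its class so that
# base and novel examples are drawn with roughly equal probability.
class_counts = torch.bincount(labels).float()
weights = 1.0 / class_counts[labels]
sampler = WeightedRandomSampler(weights, num_samples=len(dataset), replacement=True)

loader = DataLoader(dataset, batch_size=32, sampler=sampler)
```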
During base training, you use only 389 of the 1000 classes, yet you train a classifier with 1000 outputs, which seems weird.
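(For what it's worth, a W-way softmax head trained on labels drawn from only a subset of the W classes is still well defined: cross-entropy never rewards the unused output rows, so their logits just get pushed down. A minimal sketch; the 389/1000 split merely mirrors the numbers above:)

```python
import torch
import torch.nn as nn

feat_dim, num_outputs = 512, 1000
head = nn.Linear(feat_dim, num_outputs)        # 1000-way classifier
criterion = nn.CrossEntropyLoss()

# Labels only ever come from the first 389 class indices;
# the remaining 611 outputs are never the target of any example.
feats = torch.randn(32, feat_dim)
labels = torch.randint(0, 389, (32,))

loss = criterion(head(feats), labels)          # still a valid 1000-way loss
loss.backward()
```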
When low-shot training finishes, I get a linear module whose outputs combine the novel classes with the base classes. So if I use ImageNet-1k for base training and Flowers102 (one image per class) for few-shot novel-class training, do I end up with a model that has an 1102-way classifier? That seems very weird.
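(If the model really does keep both heads, the combined classifier can be viewed as the base and novel weight matrices stacked into one linear layer. A rough sketch of how a 1000 + 102 = 1102-way head could be assembled; the names here are hypothetical, not this repo's API:)

```python
import torch
import torch.nn as nn

feat_dim = 512
base_head = nn.Linear(feat_dim, 1000)          # trained on ImageNet-1k base classes
novel_head = nn.Linear(feat_dim, 102)          # trained on Flowers102 novel classes

# Concatenate the two weight matrices (and biases) row-wise to get a
# single 1102-way classifier over the union of base and novel labels.
combined = nn.Linear(feat_dim, 1102)
with torch.no_grad():
    combined.weight.copy_(torch.cat([base_head.weight, novel_head.weight], dim=0))
    combined.bias.copy_(torch.cat([base_head.bias, novel_head.bias], dim=0))
```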
freeman-1995 changed the title from "question about three training stage" to "question about three training stages" on Dec 25, 2020.