This repository has been archived by the owner on Oct 31, 2023. It is now read-only.

question about three training stages #15

Open
freeman-1995 opened this issue Dec 25, 2020 · 0 comments

Comments

@freeman-1995

freeman-1995 commented Dec 25, 2020

I have some questions about the three training stages.

  1. I see that you train the novel classes together with the base classes. However, the number of base-class examples is far greater than the number of novel-class examples, so the low-shot stage suffers from severe label imbalance. Why does it still work?
  2. During base training, I see that you use only 389 of the 1000 classes, yet you train a classifier with 1000 outputs. That seems odd.
  3. When low-shot training finishes, I get a linear module whose output combines the novel classes with the base classes. So if I use ImageNet-1k for base training and Flowers102 (one image per class) for few-shot novel-class training, do I end up with a model that has an 1102-way classifier? That seems very odd.
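To make question 3 concrete, here is a minimal NumPy sketch of the classifier geometry I mean. The class counts follow the example above (ImageNet-1k base + Flowers102 novel), but the feature dimension and random weights are illustrative assumptions, not values taken from this repo's code:

```python
import numpy as np

# Hypothetical sketch: a single linear classifier spanning base + novel classes.
NUM_BASE = 1000    # base classes (ImageNet-1k in the example above)
NUM_NOVEL = 102    # novel classes (Flowers102 in the example above)
FEATURE_DIM = 512  # feature dimension (assumed for illustration)

rng = np.random.default_rng(0)
# One joint weight matrix: each row is the classifier weight for one class.
W = rng.normal(size=(NUM_BASE + NUM_NOVEL, FEATURE_DIM))

feature = rng.normal(size=FEATURE_DIM)  # a backbone feature for one image
logits = W @ feature                    # scores over all base + novel classes
print(logits.shape)  # (1102,)
```

So after low-shot training, a single forward pass scores every image against all 1102 classes at once, which is the part that surprises me.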
@freeman-1995 freeman-1995 changed the title question about three training stage question about three training stages Dec 25, 2020