What should the learning rate be when fine-tuning a base model on a small dataset? #37

Open
Liujingxiu23 opened this issue Jan 26, 2021 · 0 comments


Liujingxiu23 commented Jan 26, 2021

I have trained a base model on 4 female datasets using the following learning rate:
optD = torch.optim.Adam(netD.parameters(), lr=1e-4, betas=(0.5, 0.9))

Now I want to fine-tune the model on a new, small female dataset. Should I change the learning rate? Maybe lr=1e-5 without decay?
Has anyone done these experiments and reached any conclusions?
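For reference, the fine-tuning setup described above can be sketched as follows. This is a minimal sketch, not a verified recipe from this repo: the ~10x learning-rate reduction is a common fine-tuning heuristic, and the `torch.nn.Linear` module is a stand-in for the real discriminator `netD`.

```python
import torch

# Stand-in module; in the question, `netD` is the pretrained GAN
# discriminator (assumption: any nn.Module behaves the same here).
netD = torch.nn.Linear(10, 1)

# Base training used lr=1e-4. A common heuristic (not a tested
# conclusion) is to fine-tune with a ~10x smaller learning rate so
# the small dataset nudges, rather than overwrites, the pretrained
# weights; betas are kept the same as in base training.
optD = torch.optim.Adam(netD.parameters(), lr=1e-5, betas=(0.5, 0.9))

print(optD.param_groups[0]["lr"])
```

Whether decay helps on a very small dataset is an empirical question; with few steps, a constant small learning rate is often sufficient.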
