Fix backbone freezing with icevision models #1163
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master    #1163      +/-   ##
==========================================
+ Coverage   89.24%   89.26%   +0.01%
==========================================
  Files         286      286
  Lines       13048    13048
==========================================
+ Hits        11645    11647       +2
+ Misses       1403     1401       -2
@@ -49,7 +49,7 @@ def __init__(
         pretrained: bool = True,
         optimizer: OPTIMIZER_TYPE = "Adam",
         lr_scheduler: LR_SCHEDULER_TYPE = None,
-        learning_rate: float = 5e-3,
+        learning_rate: float = 1e-2,
@ethanwharris Can you elaborate why the default learning rate has been changed here? Thanks.
Hey @ligaz, sorry, just an accidental change 😃 Would you prefer the old default, or something else? Please feel free to open a PR to revert / update it.
It was a general question about the reason behind this change. We are defaulting the optimizer to `Adam`, so my suggestion is to use its default learning rate. From the PyTorch docs:
learning rate (default: 1e-3)
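For reference, a quick hedged check against the PyTorch API (not code from this PR) showing that `torch.optim.Adam` falls back to `lr=1e-3` when no learning rate is passed:

```python
import torch

# torch.optim.Adam defaults ``lr`` to 1e-3 when it isn't passed explicitly.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.Adam(params)
print(optimizer.defaults["lr"])  # 1e-3
```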
I think that makes sense! I wonder if there's a way we could just not provide the LR to the optimizer constructor if it is `None`? That way, if you don't override the LR, you would always just get the default for your chosen optimizer. What do you think?
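A minimal sketch of that idea, with hypothetical names (`make_optimizer` is not part of Flash), where `lr` is only forwarded when the user actually set it:

```python
from typing import Iterable, Optional, Type

import torch


def make_optimizer(
    parameters: Iterable[torch.nn.Parameter],
    optimizer_cls: Type[torch.optim.Optimizer] = torch.optim.Adam,
    learning_rate: Optional[float] = None,
) -> torch.optim.Optimizer:
    # Only pass ``lr`` when it was explicitly provided; otherwise the
    # optimizer keeps its own default (e.g. 1e-3 for Adam).
    kwargs = {} if learning_rate is None else {"lr": learning_rate}
    return optimizer_cls(parameters, **kwargs)
```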
I was thinking about the same. It turns out that it falls back to the core Task class, which uses 5e-5 as a default value. This is the relevant code.
I'm personally not sure how we should proceed 😄
It looks like we could just default the `learning_rate` to `None`, and then adjust the logic here: https://github.com/PyTorchLightning/lightning-flash/blob/4c624823d7992aa454b9f064d2714cae9b4a8893/flash/core/model.py#L463
If the learning rate is `None`, then the `optimizer_kwargs` should just be an empty dict. That would do it, I think. Let me know if you'd like to try it, otherwise I can take it 😃
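A rough sketch of how that could look, assuming a simplified stand-in for the Task optimizer plumbing (the real method at the linked line is more involved; class and method names here are illustrative only):

```python
from typing import Iterable, Optional

import torch


class Task:
    def __init__(self, optimizer: str = "Adam", learning_rate: Optional[float] = None):
        # ``None`` now means "use the chosen optimizer's own default lr".
        self.optimizer = optimizer
        self.learning_rate = learning_rate

    def _instantiate_optimizer(self, parameters: Iterable[torch.nn.Parameter]) -> torch.optim.Optimizer:
        optimizer_cls = getattr(torch.optim, self.optimizer)
        # With no learning rate given, build an empty kwargs dict so the
        # constructor falls back to the optimizer's library default.
        optimizer_kwargs = {} if self.learning_rate is None else {"lr": self.learning_rate}
        return optimizer_cls(parameters, **optimizer_kwargs)
```

With `learning_rate=None`, an `Adam`-configured task would end up at the library default of 1e-3, while explicitly passing `learning_rate=5e-3` would restore the previous behaviour.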
Feel free to take it. Thanks for your support 🙏
What does this PR do?
Fixes #1080
Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃