Unified and Customizable Finetuning callback using hooks and registry. #830
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master     #830      +/-   ##
==========================================
+ Coverage   88.19%   88.48%   +0.28%
==========================================
  Files         250      250
  Lines       13175    13133      -42
==========================================
+ Hits        11620    11621       +1
+ Misses       1555     1512      -43
force-pushed the branch from b48c59a to 6bff010
… add Finetuning registry to enable easier access in models.
Awesome work, the `get_backbone_for_finetuning` hook is really neat. I have some reservations about where the `finetune_backbone` hook should live 😃
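For context on the placement question, here is a minimal, framework-free sketch of the two options: the hooks living on the task (model) itself, versus a callback that owns the schedule and only asks the model *what* to finetune. The class names and hook signatures mirror the comment above but are purely illustrative, not the actual lightning-flash code.

```python
# Hypothetical sketch of the two hook placements discussed above.
# A "parameter" here is just a dict with a requires_grad flag.

class TaskWithHooks:
    """Placement 1: the model exposes both hooks itself."""

    def __init__(self, backbone):
        self.backbone = backbone  # list of parameter dicts

    def get_backbone_for_finetuning(self):
        # The model knows its own structure, so it returns the backbone.
        return self.backbone

    def finetune_backbone(self, backbone, epoch):
        # The freezing schedule also lives on the model.
        if epoch < 10:
            for param in backbone:
                param["requires_grad"] = False


class FinetuningCallback:
    """Placement 2: the callback owns the schedule; the model only
    answers *what* to finetune via get_backbone_for_finetuning."""

    def on_epoch_start(self, task, epoch):
        backbone = task.get_backbone_for_finetuning()
        if epoch < 10:
            for param in backbone:
                param["requires_grad"] = False
```

Placement 2 keeps the schedule reusable across tasks, which is roughly the trade-off the comment is weighing.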
Awesome, really like the direction this is going, LGTM 😃 Some small comments.
What does this PR do?
Fixes #753
Currently, users can't reuse the existing `modules_to_freeze` implementation that a few tasks provide, and end up rewriting it themselves. This PR gives users a way to do that:

- Adds a `FineTuningHook` class which provides a hook, `modules_to_freeze`, that can be implemented once in the task definition.
- After this, the user only needs to decide which strategy to use for finetuning the model, and the API for this has been made simpler.
- Users who need more control over their finetuning implementation can subclass the `BaseFinetuning` class and pass the subclass as the `strategy` parameter.

Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in Github issues there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃