
Conversation

@karthikrangasai (Contributor) commented on Oct 4, 2021

What does this PR do?

Fixes #753

Currently, users can't reuse the existing modules_to_freeze implementation that a few tasks provide and end up rewriting it themselves. This PR gives users a way to do that.

This PR adds a FineTuningHook class which provides a hook:

  • modules_to_freeze, which can be implemented once in the task definition (see the sketch below)
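
A hypothetical sketch of what implementing this hook in a task could look like (the hook signature and the self.backbone attribute are illustrative assumptions, not code from this PR's diff):

import torch.nn as nn

import flash


class MyTask(flash.Task):
    # Assumed hook from the PR description; the real signature may differ.
    def modules_to_freeze(self) -> nn.Module:
        # Implemented once in the task definition, so every built-in
        # finetuning strategy can ask the task which module(s) to freeze.
        return self.backbone  # hypothetical attribute on this task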

After this, the user only needs to decide which strategy to finetune the model with, and the API for this has been made simpler:

# No Freeze Strategy:
trainer.finetune(model, datamodule=datamodule, strategy="no_freeze")

# Freeze Strategy:
trainer.finetune(model, datamodule=datamodule, strategy="freeze")

# Freeze Unfreeze Strategy - requires the epoch at which to unfreeze the backbone:
trainer.finetune(model, datamodule=datamodule, strategy=("freeze_unfreeze", 10))

# Unfreeze Milestones Strategy - needs the milestone epochs and the number of layers to unfreeze at once:
trainer.finetune(model, datamodule=datamodule, strategy=("unfreeze_milestones", ((5, 10), 15)))

Users who require more control over their finetuning implementation can subclass the BaseFinetuning class and provide their subclass as the input for the strategy parameter, as in the sketch below.
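
For illustration, a minimal sketch of such a custom strategy, built on PyTorch Lightning's BaseFinetuning API of the time (freeze_before_training, finetune_function, and the freeze / unfreeze_and_add_param_group helpers); the pl_module.backbone attribute is an assumption:

from pytorch_lightning.callbacks import BaseFinetuning


class FreezeBackboneUntilEpoch(BaseFinetuning):
    """Freeze the backbone, then unfreeze it at a chosen epoch."""

    def __init__(self, unfreeze_epoch: int = 5):
        super().__init__()
        self.unfreeze_epoch = unfreeze_epoch

    def freeze_before_training(self, pl_module):
        # Freeze the backbone before any training starts.
        self.freeze(pl_module.backbone)

    def finetune_function(self, pl_module, epoch, optimizer, opt_idx):
        # At the chosen epoch, unfreeze the backbone and register its
        # parameters with the running optimizer.
        if epoch == self.unfreeze_epoch:
            self.unfreeze_and_add_param_group(
                modules=pl_module.backbone, optimizer=optimizer
            )


# Pass the callback instance as the strategy:
trainer.finetune(model, datamodule=datamodule, strategy=FreezeBackboneUntilEpoch(5))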

Before submitting

  • Was this discussed/approved via a GitHub issue? (no need for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes?
  • Did you write any new necessary tests? [not needed for typos/docs]
  • Did you verify new and existing tests pass locally with your changes?
  • If you made a notable change (that affects users), did you update the CHANGELOG?

PR review

  • Is this pull request ready for review? (if not, please submit in draft mode)

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃

@codecov bot commented on Oct 4, 2021

Codecov Report

Merging #830 (2cc13df) into master (ec4b3b5) will increase coverage by 0.28%.
The diff coverage is 68.58%.


@@            Coverage Diff             @@
##           master     #830      +/-   ##
==========================================
+ Coverage   88.19%   88.48%   +0.28%     
==========================================
  Files         250      250              
  Lines       13175    13133      -42     
==========================================
+ Hits        11620    11621       +1     
+ Misses       1555     1512      -43     
Flag        Coverage Δ
unittests   88.48% <68.58%> (+0.28%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
flash/text/seq2seq/core/__init__.py 100.00% <ø> (ø)
flash/text/seq2seq/core/model.py 68.88% <23.07%> (-7.95%) ⬇️
flash/text/question_answering/finetuning.py 23.52% <23.52%> (+3.74%) ⬆️
flash/image/face_detection/model.py 37.17% <60.00%> (-2.11%) ⬇️
flash/pointcloud/segmentation/model.py 82.85% <66.66%> (+6.38%) ⬆️
flash/video/classification/model.py 83.58% <66.66%> (-3.17%) ⬇️
flash/core/finetuning.py 92.20% <91.42%> (+3.31%) ⬆️
flash/core/model.py 87.91% <96.15%> (+0.29%) ⬆️
flash/core/hooks.py 100.00% <100.00%> (ø)
flash/core/trainer.py 88.00% <100.00%> (-0.73%) ⬇️
... and 2 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@karthikrangasai force-pushed the feature/unified_and_customizable_finetuning_callback_using_hooks branch from b48c59a to 6bff010 on October 17, 2021
@karthikrangasai changed the title from "[WIP] Unified and Customizable Finetuning callback using hooks." to "[WIP] Unified and Customizable Finetuning callback using hooks and registry." on Nov 1, 2021

@ethanwharris added the "enhancement" (New feature or request) label on Nov 8, 2021
@ethanwharris (Collaborator) left a comment:

Awesome work, the get_backbone_for_finetuning hook is really neat. I have some reservations about where the finetune_backbone hook should live 😃

Resolved review comments on docs/source/general/finetuning.rst and flash/core/finetuning.py.
@mergify bot removed the "has conflicts" label on Nov 12, 2021
@ethanwharris (Collaborator) left a comment:

Awesome, really like the direction this is going, LGTM 😃 Some small comments

Resolved review comments on docs/source/api/core.rst, docs/source/general/finetuning.rst, flash/core/finetuning.py, flash/core/model.py, and flash/core/trainer.py.
@ethanwharris merged commit a0c97a3 into Lightning-Universe:master on Nov 19, 2021
Labels: enhancement (New feature or request)

Successfully merging this pull request may close these issues:

  • [RFC] Enable users of Lightning Flash to customize the finetuning callback using Hooks