Refactor plugins backward #8328
Conversation
Codecov Report
@@            Coverage Diff            @@
##           master   #8328     +/-   ##
========================================
- Coverage      92%     88%      -4%
========================================
  Files         213     214       +1
  Lines       13791   13891     +100
========================================
- Hits        12732   12223     -509
- Misses       1059    1668     +609
LGTM.
Just added some notes on potential breaking changes on master, which I think are fine if the follow-up is done quickly :)
So one interesting non-BC change here is that … I want to confirm: is this an intentional design choice? For a short-term mitigation, we've changed some of our downstream modules to use the … instead. Is this life-cycle detail documented somewhere that I might have missed? Edit: To clarify, this is for the … FYI @ananthsub
Hi @kandluis! You are correct: what you pointed out is an intentional design choice. The change does not look as clear from this PR alone since it was done in 2 separate PRs; the second part is in #8048. See the associated CHANGELOG updates. This is also mentioned in this PR's description:
What does this PR do?
- Adds `pre_backward` and `post_backward` to the precision plugin, just as we have for the training type plugin (see the sketch after this list).
- Refactors the `closure_loss` / `backward` API to reduce code duplication across plugins and clean up the structure.
- Makes the optimizer arguments `Optional` since they are not available during manual optimization.
- Calls the `on_after_backward` hook after backward regardless of `should_accumulate`.
- Calls the `on_after_backward` hook after backward when using manual optimization.
- Always calls the `LightningModule.backward` hook. Previously this was only done for Apex in manual optimization.
- Part of #7740, which validates that the hook order is correct.
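For context, here is a minimal sketch of how such a refactored backward flow fits together. The class and method bodies are illustrative assumptions, not the actual Lightning implementation; the real plugin interfaces may differ.

```python
from typing import Any, Optional

import torch
from torch.optim import Optimizer


class PrecisionPlugin:
    """Illustrative sketch only; not the actual Lightning class."""

    def pre_backward(self, model: Any, closure_loss: torch.Tensor) -> torch.Tensor:
        # e.g. scale the loss for mixed precision
        return closure_loss

    def backward(
        self,
        model: Any,
        closure_loss: torch.Tensor,
        optimizer: Optional[Optimizer],  # Optional: unavailable in manual optimization
        *args: Any,
        **kwargs: Any,
    ) -> None:
        # Note: no `should_accumulate` flag and no return value.
        closure_loss = self.pre_backward(model, closure_loss)
        closure_loss.backward(*args, **kwargs)
        self.post_backward(model, closure_loss)

    def post_backward(self, model: Any, closure_loss: torch.Tensor) -> None:
        # e.g. unscale gradients after the backward pass
        pass
```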
Does your PR introduce any breaking changes? If yes, please list them.

EXPERIMENTAL API:
- The `backward` hook of the precision plugin no longer takes a `should_accumulate` flag.
- The `backward` hook of the precision plugin no longer returns a value.
- The `pre_backward` and `post_backward` hooks of the training type plugin no longer take optimizer arguments.

STABLE API:
- `on_after_backward` was only called if `should_accumulate`. It is now always called after backward (see the example after this list).
- `on_after_backward` was only called with the unscaled loss on mixed precision. This is no longer the case.
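To illustrate the stable-API change, here is a small hypothetical example (not taken from the PR) of a module that overrides `on_after_backward`. Under the new behaviour this hook fires after every backward pass, including accumulation steps:

```python
import torch
import pytorch_lightning as pl


class GradNormLogger(pl.LightningModule):
    def on_after_backward(self) -> None:
        # Now runs after every backward pass, including gradient
        # accumulation steps; under mixed precision the gradients
        # may still be scaled at this point.
        norms = [p.grad.norm() for p in self.parameters() if p.grad is not None]
        if norms:
            self.log("grad_norm", torch.stack(norms).sum())
```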
NOTES:
- The `LightningModule.backward` hook will still get the optimizer arguments, which are now passed through `*args, **kwargs`. So no changes there.
- The follow-up PR (Add `on_before_optimizer_step` hook #8048) will implement `on_before_optimizer_step`, which will have the unscaled loss and will only be called when not accumulating. Users who were relying on the previous behaviour can simply update to the new hook (see the migration sketch below).
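A hedged migration sketch: move logic that assumed unscaled gradients and a non-accumulating step from `on_after_backward` into `on_before_optimizer_step`. The signature below follows the hook added in #8048 at the time of this PR and may differ in later releases; the clipping body is a made-up example.

```python
import pytorch_lightning as pl
from torch.optim import Optimizer


class MyModule(pl.LightningModule):
    def on_before_optimizer_step(self, optimizer: Optimizer, optimizer_idx: int) -> None:
        # Runs right before the optimizer step, i.e. only when not
        # accumulating, with gradients already unscaled.
        for p in self.parameters():
            if p.grad is not None:
                p.grad.clamp_(-1.0, 1.0)  # example: manual gradient clipping
```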