Make manual optimization mandatory for multiple optimizers #16539
Merged
Conversation
awaelchli force-pushed the removal/optimizer-loop branch from 4420593 to 973c197 on January 29, 2023 03:31
awaelchli commented on several review threads on Jan 29–30, 2023, including tests/tests_pytorch/trainer/optimization/test_multiple_optimizers.py (outdated, resolved) and tests/tests_pytorch/trainer/dynamic_args/test_multiple_eval_dataloaders.py (resolved).
awaelchli changed the title from "WIP: Make manual optimization mandatory for multiple optimizers" to "Make manual optimization mandatory for multiple optimizers" on Jan 30, 2023
carmocca approved these changes on Jan 31, 2023:
LGTM. Huge effort!
justusschock approved these changes on Jan 31, 2023
Borda approved these changes on Feb 1, 2023:
quite long but overall looks good 👍
mergify bot added the ready (PRs ready to be merged) label and removed the has conflicts and ready (PRs ready to be merged) labels on Feb 1, 2023
Labels: breaking change (Includes a breaking change), loops (Related to the Loop API), optimization, pl (Generic label for PyTorch Lightning package), ready (PRs ready to be merged), refactor
What does this PR do?
WARNING Estimated time to review: 2 hours + 🤣.
This PR reworks how optimization with multiple optimizers works.
Closes #12169
Before
The user returns multiple optimizers from the configure_optimizers hook. The training step gets an optimizer_idx argument and is called as many times with the same batch as there are optimizers (see the sketch below).
Pro: Lightning makes the zero_grad and step calls for you.
Cons:
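For reference, a minimal sketch of the old style under automatic optimization; the toy two-optimizer module, data shapes, and "losses" are illustrative placeholders, not code from this PR:

```python
import torch
from torch import nn
import pytorch_lightning as pl


class OldStyleModule(pl.LightningModule):
    """Illustrative pre-2.0 module: two optimizers with automatic optimization."""

    def __init__(self):
        super().__init__()
        self.generator = nn.Linear(8, 8)
        self.discriminator = nn.Linear(8, 1)

    def training_step(self, batch, batch_idx, optimizer_idx):
        # Called once per optimizer with the same batch; Lightning runs
        # backward(), optimizer.step(), and optimizer.zero_grad() for you.
        if optimizer_idx == 0:
            return self.generator(batch).sum()  # toy "generator loss"
        return self.discriminator(batch).sum()  # toy "discriminator loss"

    def configure_optimizers(self):
        opt_g = torch.optim.Adam(self.generator.parameters(), lr=1e-3)
        opt_d = torch.optim.Adam(self.discriminator.parameters(), lr=1e-3)
        return opt_g, opt_d
```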
Now
Returning multiple optimizers from the configure_optimizers hook will raise an error. You need to set self.automatic_optimization=False and implement manual optimization in your training step (see the sketch below).
Pro:
Cons: You need to call zero_grad, step, and manual_backward manually.
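And a minimal sketch of the same illustrative module migrated to manual optimization, as now required:

```python
import torch
from torch import nn
import pytorch_lightning as pl


class ManualOptimizationModule(pl.LightningModule):
    """Illustrative module: multiple optimizers now require manual optimization."""

    def __init__(self):
        super().__init__()
        self.automatic_optimization = False  # mandatory for multiple optimizers
        self.generator = nn.Linear(8, 8)
        self.discriminator = nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        opt_g, opt_d = self.optimizers()

        # Generator update: zero_grad, backward, and step are now explicit.
        g_loss = self.generator(batch).sum()
        opt_g.zero_grad()
        self.manual_backward(g_loss)
        opt_g.step()

        # Discriminator update
        d_loss = self.discriminator(batch).sum()
        opt_d.zero_grad()
        self.manual_backward(d_loss)
        opt_d.step()

    def configure_optimizers(self):
        opt_g = torch.optim.Adam(self.generator.parameters(), lr=1e-3)
        opt_d = torch.optim.Adam(self.discriminator.parameters(), lr=1e-3)
        return opt_g, opt_d
```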
Does your PR introduce any breaking changes? If yes, please list them.
- Removed the opt_idx argument from the BaseFinetuning.finetune_function callback method
- Removed the opt_idx argument from the Callback.on_before_optimizer_step callback method
- Removed optimizer_idx as an optional argument in LightningModule.training_step (see the migration sketch after this list)
- Removed the optimizer_idx argument from LightningModule.on_before_optimizer_step
- Removed the optimizer_idx argument from LightningModule.configure_gradient_clipping
- Removed the optimizer_idx argument from LightningModule.optimizer_step
- Removed the optimizer_idx argument from LightningModule.optimizer_zero_grad
- Removed the optimizer_idx argument from LightningModule.lr_scheduler_step
- LightningModule.configure_optimizers no longer accepts multiple optimizers when automatic optimization is enabled
- Removed the optimizer and optimizer_idx arguments from LightningModule.backward
- Removed the optimizer_idx argument from PrecisionPlugin.optimizer_step and all of its overrides in subclasses
- Removed the optimizer_idx argument from PrecisionPlugin.{optimizer_step,backward} and all of its overrides in subclasses
- Removed the optimizer_idx argument from Strategy.{optimizer_step,backward} and all of its overrides in subclasses
- Removed the Trainer.optimizer_frequencies attribute
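To make the hook changes concrete, here is a hedged migration sketch. The removed optimizer_idx parameters come from the list above; the remaining parameter names and the clip_gradients call are assumptions based on the surrounding Lightning API and may differ slightly from the final signatures:

```python
import torch
from torch import nn
import pytorch_lightning as pl


class MigratedModule(pl.LightningModule):
    """Hypothetical module with hook overrides updated after the optimizer_idx removal."""

    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        # Before: training_step(self, batch, batch_idx, optimizer_idx)
        return self.layer(batch).sum()

    def lr_scheduler_step(self, scheduler, metric):
        # Before: lr_scheduler_step(self, scheduler, optimizer_idx, metric)
        if metric is None:
            scheduler.step()
        else:
            scheduler.step(metric)

    def configure_gradient_clipping(self, optimizer, gradient_clip_val=None, gradient_clip_algorithm=None):
        # Before: configure_gradient_clipping(self, optimizer, optimizer_idx, ...)
        self.clip_gradients(
            optimizer,
            gradient_clip_val=gradient_clip_val,
            gradient_clip_algorithm=gradient_clip_algorithm,
        )

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```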
Follow ups to this PR:

Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
Before you start reviewing, make sure you have read the Review guidelines. In short, see the following bullet-list:
Did you have fun?
I made sure I had fun coding 🙃
cc @justusschock @awaelchli @Borda @carmocca