
Fix checkpointed state for lr_schedulers with step interval #7877

Merged (39 commits) on Jun 21, 2021

Conversation

@simran2905 (Contributor) commented Jun 8, 2021

What does this PR do?

Fixes #7637

The LR schedulers are stepped after on_train_batch_end and trainer._run_evaluation() have been called, and both of those can save checkpoints. As a result, a checkpoint saved at that point contains a stale scheduler state, one step behind the live scheduler. This PR fixes the ordering for classic (step-interval) schedulers.

NOTE: ReduceLROnPlateau schedulers still need on_train_batch_end to run first so that the monitored metrics are logged, and may therefore continue to checkpoint a slightly older state.
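The ordering problem can be illustrated with a minimal, self-contained sketch. This is pure Python with a hypothetical FakeScheduler standing in for a real LR scheduler and for Lightning's hook ordering, not the actual Lightning internals: when the checkpoint is captured before the scheduler steps for the batch, the saved state lags the live scheduler by one step.

```python
class FakeScheduler:
    """Toy stand-in for a step-interval LR scheduler (hypothetical)."""

    def __init__(self, lr=0.1, gamma=0.5):
        self.lr = lr
        self.gamma = gamma
        self.step_count = 0

    def step(self):
        self.step_count += 1
        self.lr *= self.gamma

    def state_dict(self):
        return {"lr": self.lr, "step_count": self.step_count}


def run_batch(scheduler, *, step_before_checkpoint):
    """Simulate one training batch; return the checkpointed scheduler state."""
    if step_before_checkpoint:            # ordering after this PR's fix
        scheduler.step()
    checkpoint = scheduler.state_dict()   # e.g. a hook saving a checkpoint
    if not step_before_checkpoint:        # ordering before the fix
        scheduler.step()
    return checkpoint


buggy = run_batch(FakeScheduler(), step_before_checkpoint=False)
fixed = run_batch(FakeScheduler(), step_before_checkpoint=True)
print(buggy["step_count"], fixed["step_count"])  # 0 1
```

In the buggy ordering, restoring `buggy` after a crash would replay the scheduler step that had already happened, so the learning rate schedule drifts from the uninterrupted run.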

Before submitting

  • Was this discussed/approved via a GitHub issue? (not for typos and docs)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests? (not for typos and docs)
  • Did you verify new and existing tests pass locally with your changes?
  • Did you update the CHANGELOG? (not for typos, docs, test updates, or internal minor changes/refactorings)

PR review

Anyone in the community is free to review the PR once the tests have passed.
Before you start reviewing, make sure you have read the Review guidelines. In short, check the following:

  • Is this pull request ready for review? (if not, please submit in draft mode)
  • Check that all items from Before submitting are resolved
  • Make sure the title is self-explanatory and the description concisely explains the PR
  • Add labels and milestones (and optionally projects) to the PR so it can be classified

Did you have fun?

Make sure you had fun coding 🙃

@pep8speaks commented Jun 8, 2021

Hello @simran2905! Thanks for updating this PR.

There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻

Comment last updated at 2021-06-21 10:13:15 UTC

@codecov bot commented Jun 8, 2021

Codecov Report

Merging #7877 (bbbfce7) into master (2303f9c) will decrease coverage by 5%.
The diff coverage is 100%.

@@           Coverage Diff           @@
##           master   #7877    +/-   ##
=======================================
- Coverage      92%     87%    -5%     
=======================================
  Files         210     210            
  Lines       13579   13581     +2     
=======================================
- Hits        12452   11812   -640     
- Misses       1127    1769   +642     

@tchaton (Contributor) left a review comment:
Overall, looks good to me

Review thread on tests/trainer/optimization/test_optimizers.py (outdated, resolved)

@simran2905 simran2905 changed the base branch from master to bugfix/avoid-warnings June 8, 2021 17:30
@simran2905 simran2905 changed the base branch from bugfix/avoid-warnings to master June 8, 2021 17:30
@awaelchli (Contributor) commented:

Moved your changes from pytorch_lightning/trainer/training_loop.py to pytorch_lightning/loops/training_epoch_loop.py.

@carmocca (Contributor) left a review comment:
Pushed a few minor changes. Almost ready

@carmocca carmocca modified the milestones: v1.3.x, v1.4 Jun 15, 2021
@carmocca (Contributor) commented:

Changed the milestone to v1.4, as this requires several refactors not in the bug-fix branch.

@simran2905 (Author) commented:

Thanks @awaelchli and @carmocca for the merges.

@carmocca carmocca added the ready PRs ready to be merged label Jun 17, 2021
@carmocca carmocca self-assigned this Jun 17, 2021
@mergify mergify bot removed the has conflicts label Jun 18, 2021
@carmocca carmocca enabled auto-merge (squash) June 19, 2021 00:14
@tchaton (Contributor) left a review comment:

LGTM!

@tchaton (Contributor) commented Jun 21, 2021:

Hey @simran2905,

Quick question: why differentiate reduce_on_plateau schedulers from the rest? Can't we call update_learning_rates only after the callback metrics have been populated, for all schedulers?

Best,
T.C

@carmocca (Contributor) commented:

> Quick question: why differentiate reduce_on_plateau schedulers from the rest? Can't we call update_learning_rates only after the callback metrics have been populated, for all schedulers?

See the discussion in the linked issue: #7637 (comment)
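To illustrate why ReduceLROnPlateau is the exception, here is a toy stand-in (a hypothetical FakePlateauScheduler, not torch's class): unlike a classic scheduler, its step() requires the monitored metric as an argument, and that metric only exists after on_train_batch_end has run the logging callbacks, so it cannot simply be stepped earlier.

```python
class FakePlateauScheduler:
    """Toy stand-in for a reduce-on-plateau scheduler (hypothetical)."""

    def __init__(self, lr=0.1, factor=0.5):
        self.lr = lr
        self.factor = factor
        self.best = float("inf")

    def step(self, metric):
        # Needs the monitored metric, which is only populated after the
        # logging hooks (e.g. on_train_batch_end) have run for this batch.
        if metric >= self.best:
            self.lr *= self.factor   # no improvement: reduce LR
        else:
            self.best = metric       # improvement: keep LR, track new best


sched = FakePlateauScheduler()
sched.step(1.0)   # improvement over inf: LR unchanged
sched.step(2.0)   # no improvement: LR halved
print(sched.lr)   # 0.05
```

A classic scheduler's step() takes no arguments, so it can safely move before the checkpoint-saving hooks; the plateau variant cannot, which is why the PR keeps the old ordering for it.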

@carmocca carmocca merged commit d1efae2 into Lightning-AI:master Jun 21, 2021
Labels: bug (Something isn't working), ready (PRs ready to be merged)
Projects: None yet
Development

Successfully merging this pull request may close these issues.

LR scheduler steps after saving checkpoint with iteration-based checkpointing
7 participants