
Conversation

ryan597 (Contributor) commented Dec 18, 2023

What does this PR do?

Reset the trainer variable should_stop when fit is called.

If fit is called after early stopping has already stopped training, the model will not continue training because the trainer flag should_stop is currently not reset when fit is called. This PR changes the trainer so that should_stop is reset every time fit is called (see the sketch below the issue reference).

Fixes #18727
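
A minimal sketch of the scenario this fix enables. The model subclass, import paths, and callback settings here are illustrative assumptions, not part of the PR; a metric named "loss" is logged only so EarlyStopping has something to monitor.

from lightning.pytorch import Trainer
from lightning.pytorch.callbacks import EarlyStopping
from lightning.pytorch.demos.boring_classes import BoringModel


class LossLoggingModel(BoringModel):
    # Illustrative helper: log a metric so EarlyStopping has something to monitor.
    def training_step(self, batch, batch_idx):
        loss = self.step(batch)
        self.log("loss", loss)
        return {"loss": loss}


model = LossLoggingModel()
early_stop = EarlyStopping(monitor="loss", check_on_train_epoch_end=True, stopping_threshold=100)
trainer = Trainer(max_epochs=3, callbacks=[early_stop])

trainer.fit(model)  # early stopping fires and sets trainer.should_stop = True
# Before this change, a second fit call made no progress because should_stop stayed True.
# With should_stop reset at the start of fit, training can run again:
trainer.fit(model)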

Before submitting
  • Was this discussed/agreed via a GitHub issue? (not for typos and docs)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests? (not for typos and docs)
  • Did you verify new and existing tests pass locally with your changes?
  • Did you list all the breaking changes introduced by this pull request?
  • Did you update the CHANGELOG? (not for typos, docs, test updates, or minor internal changes/refactors)

PR review

Anyone in the community is welcome to review the PR.
Before you start reviewing, make sure you have read the review guidelines. In short, see the following bullet-list:

Reviewer checklist
  • Is this pull request ready for review? (if not, please submit in draft mode)
  • Check that all items from Before submitting are resolved
  • Make sure the title is self-explanatory and the description concisely explains the PR
  • Add labels and milestones (and optionally projects) to the PR so it can be classified

📚 Documentation preview 📚: https://pytorch-lightning--19177.org.readthedocs.build/en/19177/

github-actions bot added the pl (Generic label for PyTorch Lightning package) label on Dec 18, 2023
ryan597 (Contributor, Author) commented Dec 18, 2023

It seems this is failing on a test that is designed to make sure the trainer stays at should_stop=True, related to #15708:

@pytest.mark.parametrize(("min_epochs", "min_steps", "val_count"), [(3, None, 3), (None, 3, 2)])
def test_should_stop_triggers_validation_once(min_epochs, min_steps, val_count, tmp_path):
    """Regression test for issue #15708.

    Test that the request for `should_stop=True` only triggers validation when Trainer is allowed to stop
    (min_epochs/steps is satisfied).

    """
    model = BoringModel()
    trainer = Trainer(
        default_root_dir=tmp_path,
        num_sanity_val_steps=0,
        limit_val_batches=2,
        limit_train_batches=2,
        max_epochs=3,
        min_epochs=min_epochs,
        min_steps=min_steps,
        enable_model_summary=False,
        enable_checkpointing=False,
    )
    trainer.should_stop = True  # Request to stop before min_epochs/min_steps are reached
    trainer.fit_loop.epoch_loop.val_loop.run = Mock()
    trainer.fit(model)
    assert trainer.fit_loop.epoch_loop.val_loop.run.call_count == val_count
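
To make the conflict concrete, here is a tiny self-contained toy (not Lightning code; the exact placement of the reset inside fit is an assumption) showing why a manually pre-set should_stop no longer survives into the loop:

class ToyTrainer:
    # Toy stand-in for the real Trainer, only to illustrate the interaction.
    def __init__(self):
        self.should_stop = False

    def fit(self):
        self.should_stop = False  # the reset this PR introduces (exact placement assumed)
        epoch = 0
        while not self.should_stop and epoch < 3:  # stand-in for the fit loop's stop check
            epoch += 1
        return epoch


trainer = ToyTrainer()
trainer.should_stop = True  # the test's manual stop request...
print(trainer.fit())        # ...is wiped by the reset, so all 3 "epochs" run and 3 is printed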

ryan597 (Contributor, Author) commented Dec 18, 2023

I have changed the above test to use an EarlyStopping callback instead of setting the flag directly via trainer.should_stop = True, so the test now passes with the following changes:

+    class NewBoring(BoringModel):
+        def training_step(self, batch, batch_idx):
+            self.log("loss", self.step(batch))
+            return {"loss": self.step(batch)}
+
-    model = BoringModel()
+    model = NewBoring()
+    # create a stopping condition with a high threshold so it triggers immediately
+    # check the condition before validation so the count is unaffected
+    stopping = EarlyStopping(monitor="loss", check_on_train_epoch_end=True, stopping_threshold=100)
     trainer = Trainer(
         default_root_dir=tmp_path,
         num_sanity_val_steps=0,
         limit_val_batches=2,
         limit_train_batches=2,
         max_epochs=3,
         min_epochs=min_epochs,
         min_steps=min_steps,
         enable_model_summary=False,
         enable_checkpointing=False,
+        callbacks=[stopping],
     )
-    trainer.should_stop = True  # Request to stop before min_epochs/min_steps are reached
     trainer.fit_loop.epoch_loop.val_loop.run = Mock()
     trainer.fit(model)
     assert trainer.fit_loop.epoch_loop.val_loop.run.call_count == val_count
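
For readability, here is a consolidated view of what the modified test looks like with the diff applied. This is a reconstruction, not the merged source; the import paths and the placement of NewBoring inside the test function are assumptions.

from unittest.mock import Mock

import pytest
from lightning.pytorch import Trainer
from lightning.pytorch.callbacks import EarlyStopping
from lightning.pytorch.demos.boring_classes import BoringModel


@pytest.mark.parametrize(("min_epochs", "min_steps", "val_count"), [(3, None, 3), (None, 3, 2)])
def test_should_stop_triggers_validation_once(min_epochs, min_steps, val_count, tmp_path):
    """Regression test for issue #15708: should_stop only triggers validation once the Trainer
    is allowed to stop (min_epochs/min_steps is satisfied)."""

    class NewBoring(BoringModel):
        def training_step(self, batch, batch_idx):
            # Log a "loss" metric so the EarlyStopping callback has something to monitor.
            self.log("loss", self.step(batch))
            return {"loss": self.step(batch)}

    model = NewBoring()
    # A stopping condition with a high threshold so it triggers immediately;
    # it is checked on train epoch end so the validation call count is unaffected.
    stopping = EarlyStopping(monitor="loss", check_on_train_epoch_end=True, stopping_threshold=100)
    trainer = Trainer(
        default_root_dir=tmp_path,
        num_sanity_val_steps=0,
        limit_val_batches=2,
        limit_train_batches=2,
        max_epochs=3,
        min_epochs=min_epochs,
        min_steps=min_steps,
        enable_model_summary=False,
        enable_checkpointing=False,
        callbacks=[stopping],
    )
    trainer.fit_loop.epoch_loop.val_loop.run = Mock()
    trainer.fit(model)
    assert trainer.fit_loop.epoch_loop.val_loop.run.call_count == val_count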

awaelchli added the community (This PR is from the community) label on Dec 19, 2023
gitguardian bot commented Jan 16, 2024

️✅ There are no secrets present in this pull request anymore.

If these secrets were true positives and are still valid, we highly recommend that you revoke them.
Once a secret has been leaked into a git repository, you should consider it compromised, even if it was deleted immediately.
More information about the risks is available from GitGuardian.



mergify bot removed the has conflicts label on Feb 16, 2024
codecov bot commented Feb 16, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 48%. Comparing base (2a827f3) to head (005209c).
Report is 347 commits behind head on master.

❗ There is a different number of reports uploaded between BASE (2a827f3) and HEAD (005209c): HEAD has 179 fewer uploads than BASE.

Flag              BASE (2a827f3)   HEAD (005209c)
lightning         36               13
cpu               66               21
pytest            51               2
python3.10        17               9
gpu               4                2
lightning_fabric  10               0
python3.8         12               6
python3.11        17               6
app               9                0
examples          9                0
tpu               1                0
lightning_app     6                0
Additional details and impacted files
@@            Coverage Diff             @@
##           master   #19177      +/-   ##
==========================================
- Coverage      83%      48%     -35%     
==========================================
  Files         450      442       -8     
  Lines       38250    38098     -152     
==========================================
- Hits        31893    18438   -13455     
- Misses       6357    19660   +13303     

qqueing (Contributor) commented Jul 16, 2024

Is this PR still in progress?

Borda merged commit 40c682e into Lightning-AI:master on Mar 14, 2025
Borda pushed a commit that referenced this pull request Mar 18, 2025
---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
(cherry picked from commit 40c682e)
lexierule pushed a commit that referenced this pull request Mar 18, 2025
---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
(cherry picked from commit 40c682e)

Labels

community (This PR is from the community), pl (Generic label for PyTorch Lightning package)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

EarlyStopping not updating it's value after resuming training

4 participants