
Set find_unused_parameters=False as the default #16611

Merged
merged 11 commits into master from feature/find-unused-params-false
Feb 6, 2023

Conversation

@awaelchli (Member) commented Feb 2, 2023

What does this PR do?

Fixes #7330
Fixes #14486
Closes #12445
Part of #12398

The default value for DDP's find_unused_parameters in PyTorch is False. Until now, Lightning has always flipped it to True, to minimize confusion for users who would otherwise see this error from PyTorch:

RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel, and by
making sure all forward function outputs participate in calculating loss.
If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable).
Parameter indices which did not receive grad for rank 0: 0 1
In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to either INFO or DETAIL to print out information about which particular parameters did not receive gradient on this rank as part of this error
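
For illustration, here is a minimal single-process reproduction of that error. The module and shapes are hypothetical, and it assumes a CPU "gloo" process group; it is a sketch, not code from this PR:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

class TwoBranchModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.used = torch.nn.Linear(4, 4)
        self.unused = torch.nn.Linear(4, 4)  # never called in forward

    def forward(self, x):
        return self.used(x)  # self.unused produces no gradients

# find_unused_parameters defaults to False in PyTorch's DDP
model = DDP(TwoBranchModel())

for _ in range(2):
    # The reducer never finishes for self.unused, so the second
    # iteration raises the RuntimeError quoted above.
    loss = model(torch.randn(2, 4)).sum()
    loss.backward()
```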

This error would often occur when users dealt with multiple optimizers (GANs, etc.). After #16539, this issue is less significant. However, the drawback of setting find_unused_parameters=True is that it compromises performance. PyTorch shows a warning:

Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())

But the warning points at the PyTorch DDP constructor, which the Lightning user never interacts with directly, so it is not very actionable.
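
This is the situation Lightning users hit before 2.0: the flag was on, all parameters were used, and the warning fired at the raw DDP layer. A minimal illustration, continuing the hypothetical single-process setup from the sketch above:

```python
# A model that uses all of its parameters, wrapped with the flag on:
model = DDP(torch.nn.Linear(4, 4), find_unused_parameters=True)

loss = model(torch.randn(2, 4)).sum()
loss.backward()  # emits the "find_unused_parameters=True was specified" warning
```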

Since Trainer 2.0 favors performance and parity with PyTorch, we will switch find_unused_parameters to False to align with PyTorch's default. Earlier discussion: #6219
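
For users who do rely on unused-parameter detection, the old behavior remains one flag away. A sketch of the opt-in path, assuming the registered ddp_find_unused_parameters_true alias and the DDPStrategy keyword argument:

```python
from lightning.pytorch import Trainer
from lightning.pytorch.strategies import DDPStrategy

# Registered shorthand that restores the pre-2.0 behavior:
trainer = Trainer(accelerator="gpu", devices=2, strategy="ddp_find_unused_parameters_true")

# Equivalent explicit configuration:
trainer = Trainer(
    accelerator="gpu",
    devices=2,
    strategy=DDPStrategy(find_unused_parameters=True),
)
```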

Does your PR introduce any breaking changes? If yes, please list them.

In summary, these are the possible configurations a user can run into:

Case 1: User has unused parameters in their forward-backward

  • Trainer < 2.0: No errors, no warnings, but the expected performance hit from unused-parameter detection
  • Trainer >= 2.0: User gets an error unless they set strategy="ddp_find_unused_parameters_true" (see the sketch after this list).

Case 2: User has no unused parameters in their forward-backward

  • Trainer < 2.0: User got a PyTorch warning because find_unused_parameters=True was the default, plus an unexpected performance hit
  • Trainer >= 2.0: No errors, no warnings, no performance hit

A breaking change occurs for users in Case 1 when switching to >= 2.0.
All other users see one warning fewer and a speed-up in their training.
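
To make Case 1 concrete, here is a hypothetical LightningModule whose loss exercises only one of its two submodules; all names and shapes are illustrative:

```python
import torch
from lightning.pytorch import LightningModule, Trainer

class ConditionalModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.encoder = torch.nn.Linear(4, 4)
        self.aux_head = torch.nn.Linear(4, 4)  # never used in training_step

    def training_step(self, batch, batch_idx):
        # Only self.encoder contributes to the loss, so the parameters
        # of self.aux_head receive no gradients.
        return self.encoder(batch).sum()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)

# With plain strategy="ddp", Trainer >= 2.0 would raise the RuntimeError
# quoted above; such models need the explicit opt-in:
trainer = Trainer(accelerator="cpu", devices=2, strategy="ddp_find_unused_parameters_true")
```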

Before submitting

  • Was this discussed/approved via a GitHub issue? (not for typos and docs)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests? (not for typos and docs)
  • Did you verify new and existing tests pass locally with your changes?
  • Did you update the CHANGELOG? (not for typos, docs, test updates, or internal minor changes/refactorings)

PR review

Anyone in the community is free to review the PR once the tests have passed.
Before you start reviewing make sure you have read Review guidelines. In short, see the following bullet-list:

  • Is this pull request ready for review? (if not, please submit in draft mode)
  • Check that all items from Before submitting are resolved
  • Make sure the title is self-explanatory and the description concisely explains the PR
  • Add labels and milestones (and optionally projects) to the PR so it can be classified

Did you have fun?

I made sure I had fun coding 🙃

cc @Borda @justusschock @awaelchli

@github-actions bot added the pl (Generic label for PyTorch Lightning package) label Feb 2, 2023
@awaelchli added the breaking change (Includes a breaking change) and strategy: ddp (DistributedDataParallel) labels and removed the pl (Generic label for PyTorch Lightning package) label Feb 2, 2023
@awaelchli added this to the 2.0 milestone Feb 2, 2023
@awaelchli self-assigned this Feb 2, 2023
@github-actions bot added the pl (Generic label for PyTorch Lightning package) label Feb 2, 2023
@coreyjadams mentioned this pull request Feb 2, 2023
github-actions bot commented Feb 6, 2023

⚡ Required checks status: All passing 🟢

Groups summary

🟢 pytorch_lightning: Tests workflow
Check ID Status
pl-cpu (macOS-11, lightning, 3.8, 1.11) success
pl-cpu (macOS-11, lightning, 3.9, 1.12) success
pl-cpu (macOS-11, lightning, 3.10, 1.13) success
pl-cpu (macOS-11, lightning, 3.8, 1.11, oldest) success
pl-cpu (ubuntu-20.04, lightning, 3.9, 1.11) success
pl-cpu (ubuntu-20.04, lightning, 3.10, 1.12) success
pl-cpu (ubuntu-20.04, lightning, 3.10, 1.13) success
pl-cpu (ubuntu-20.04, lightning, 3.8, 1.11, oldest) success
pl-cpu (windows-2022, lightning, 3.9, 1.11) success
pl-cpu (windows-2022, lightning, 3.10, 1.12) success
pl-cpu (windows-2022, lightning, 3.10, 1.13) success
pl-cpu (windows-2022, lightning, 3.8, 1.11, oldest) success
pl-cpu (slow, macOS-11, lightning, 3.8, 1.11) success
pl-cpu (slow, ubuntu-20.04, lightning, 3.8, 1.11) success
pl-cpu (slow, windows-2022, lightning, 3.8, 1.11) success
pl-cpu (macOS-11, pytorch, 3.8, 1.13) success
pl-cpu (ubuntu-20.04, pytorch, 3.8, 1.13) success
pl-cpu (windows-2022, pytorch, 3.8, 1.13) success

These checks are required after the changes to src/lightning/pytorch/strategies/ddp.py, src/lightning/pytorch/strategies/ddp_spawn.py, src/lightning/pytorch/strategies/hpu_parallel.py, src/lightning/pytorch/trainer/connectors/accelerator_connector.py, tests/tests_pytorch/benchmarks/test_sync_batchnorm_parity.py, tests/tests_pytorch/callbacks/test_stochastic_weight_avg.py, tests/tests_pytorch/strategies/test_ddp.py, tests/tests_pytorch/strategies/test_ddp_spawn_strategy.py, tests/tests_pytorch/strategies/test_registry.py, tests/tests_pytorch/trainer/connectors/test_accelerator_connector.py, tests/tests_pytorch/trainer/optimization/test_manual_optimization.py.

🟢 pytorch_lightning: Azure GPU
Check ID Status
pytorch-lightning (GPUs) success

These checks are required after the changes to src/lightning/pytorch/strategies/ddp.py, src/lightning/pytorch/strategies/ddp_spawn.py, src/lightning/pytorch/strategies/hpu_parallel.py, src/lightning/pytorch/trainer/connectors/accelerator_connector.py, tests/tests_pytorch/benchmarks/test_sync_batchnorm_parity.py, tests/tests_pytorch/callbacks/test_stochastic_weight_avg.py, tests/tests_pytorch/strategies/test_ddp.py, tests/tests_pytorch/strategies/test_ddp_spawn_strategy.py, tests/tests_pytorch/strategies/test_registry.py, tests/tests_pytorch/trainer/connectors/test_accelerator_connector.py, tests/tests_pytorch/trainer/optimization/test_manual_optimization.py.

🟢 pytorch_lightning: Benchmarks
Check ID Status
pytorch-lightning.Benchmark success

These checks are required after the changes to tests/tests_pytorch/benchmarks/test_sync_batchnorm_parity.py.

🟢 pytorch_lightning: Azure HPU
Check ID Status
pytorch-lightning (HPUs) success

These checks are required after the changes to src/lightning/pytorch/strategies/ddp.py, src/lightning/pytorch/strategies/ddp_spawn.py, src/lightning/pytorch/strategies/hpu_parallel.py, src/lightning/pytorch/trainer/connectors/accelerator_connector.py, tests/tests_pytorch/benchmarks/test_sync_batchnorm_parity.py, tests/tests_pytorch/callbacks/test_stochastic_weight_avg.py, tests/tests_pytorch/strategies/test_ddp.py, tests/tests_pytorch/strategies/test_ddp_spawn_strategy.py, tests/tests_pytorch/strategies/test_registry.py, tests/tests_pytorch/trainer/connectors/test_accelerator_connector.py, tests/tests_pytorch/trainer/optimization/test_manual_optimization.py.

🟢 pytorch_lightning: Azure IPU
Check ID Status
pytorch-lightning (IPUs) success

These checks are required after the changes to src/lightning/pytorch/strategies/ddp.py, src/lightning/pytorch/strategies/ddp_spawn.py, src/lightning/pytorch/strategies/hpu_parallel.py, src/lightning/pytorch/trainer/connectors/accelerator_connector.py, tests/tests_pytorch/benchmarks/test_sync_batchnorm_parity.py, tests/tests_pytorch/callbacks/test_stochastic_weight_avg.py, tests/tests_pytorch/strategies/test_ddp.py, tests/tests_pytorch/strategies/test_ddp_spawn_strategy.py, tests/tests_pytorch/strategies/test_registry.py, tests/tests_pytorch/trainer/connectors/test_accelerator_connector.py, tests/tests_pytorch/trainer/optimization/test_manual_optimization.py.

🟢 pytorch_lightning: Docs
Check ID Status
make-doctest (pytorch) success
make-html (pytorch) success

These checks are required after the changes to src/lightning/pytorch/strategies/ddp.py, src/lightning/pytorch/strategies/ddp_spawn.py, src/lightning/pytorch/strategies/hpu_parallel.py, src/lightning/pytorch/trainer/connectors/accelerator_connector.py, docs/source-pytorch/advanced/model_parallel.rst, docs/source-pytorch/advanced/strategy_registry.rst, docs/source-pytorch/extensions/strategy.rst.

🟢 mypy
Check ID Status
mypy success

These checks are required after the changes to src/lightning/pytorch/strategies/ddp.py, src/lightning/pytorch/strategies/ddp_spawn.py, src/lightning/pytorch/strategies/hpu_parallel.py, src/lightning/pytorch/trainer/connectors/accelerator_connector.py.

🟢 install
Check ID Status
install-pkg (ubuntu-22.04, app, 3.8) success
install-pkg (ubuntu-22.04, app, 3.10) success
install-pkg (ubuntu-22.04, fabric, 3.8) success
install-pkg (ubuntu-22.04, fabric, 3.10) success
install-pkg (ubuntu-22.04, pytorch, 3.8) success
install-pkg (ubuntu-22.04, pytorch, 3.10) success
install-pkg (ubuntu-22.04, lightning, 3.8) success
install-pkg (ubuntu-22.04, lightning, 3.10) success
install-pkg (ubuntu-22.04, notset, 3.8) success
install-pkg (ubuntu-22.04, notset, 3.10) success
install-pkg (macOS-12, app, 3.8) success
install-pkg (macOS-12, app, 3.10) success
install-pkg (macOS-12, fabric, 3.8) success
install-pkg (macOS-12, fabric, 3.10) success
install-pkg (macOS-12, pytorch, 3.8) success
install-pkg (macOS-12, pytorch, 3.10) success
install-pkg (macOS-12, lightning, 3.8) success
install-pkg (macOS-12, lightning, 3.10) success
install-pkg (macOS-12, notset, 3.8) success
install-pkg (macOS-12, notset, 3.10) success
install-pkg (windows-2022, app, 3.8) success
install-pkg (windows-2022, app, 3.10) success
install-pkg (windows-2022, fabric, 3.8) success
install-pkg (windows-2022, fabric, 3.10) success
install-pkg (windows-2022, pytorch, 3.8) success
install-pkg (windows-2022, pytorch, 3.10) success
install-pkg (windows-2022, lightning, 3.8) success
install-pkg (windows-2022, lightning, 3.10) success
install-pkg (windows-2022, notset, 3.8) success
install-pkg (windows-2022, notset, 3.10) success

These checks are required after the changes to src/lightning/pytorch/strategies/ddp.py, src/lightning/pytorch/strategies/ddp_spawn.py, src/lightning/pytorch/strategies/hpu_parallel.py, src/lightning/pytorch/trainer/connectors/accelerator_connector.py.

🟢 link-check
Check ID Status
markdown-link-check success

These checks are required after the changes to src/lightning/pytorch/CHANGELOG.md.


Thank you for your contribution! 💜

Note
This comment is automatically generated and updates for 60 minutes every 180 seconds. If you have any other questions, contact carmocca for help.

@awaelchli mentioned this pull request Feb 6, 2023
@justusschock enabled auto-merge (squash) February 6, 2023 15:45
@mergify bot added the ready (PRs ready to be merged) label Feb 6, 2023
@justusschock merged commit cd0eedb into master Feb 6, 2023
@justusschock deleted the feature/find-unused-params-false branch February 6, 2023 15:51
Labels

  • breaking change: Includes a breaking change
  • pl: Generic label for PyTorch Lightning package
  • ready: PRs ready to be merged
  • strategy: ddp (DistributedDataParallel)
Projects
None yet
4 participants