
Setup phased transition away from PyTorch version-specific handling of cuda availability and device counting #15133

Merged: 4 commits into Lightning-AI:master on Oct 18, 2022

Conversation

speediedan
Contributor

@speediedan speediedan commented Oct 13, 2022

What does this PR do?

Set up a phased transition away from PyTorch version-specific handling of CUDA availability (torch.cuda.is_available()) and device counting (torch.cuda.device_count()).

This is a follow-up to #15110 and pytorch/pytorch#85951 that prepares for the removal of unnecessary PyTorch version-specific code once PyTorch 1.13 and 1.14, respectively, become the minimum supported version.

Specifically:

  1. All local NVML-based CUDA device counting code copied from upstream PyTorch can be removed once PyTorch 1.13 is the minimum required (a sketch of the general approach follows this list).
  2. The _patch_cuda_is_available() context manager (and associated usage of it) and redirection of is_cuda_available to num_cuda_devices() can be removed once PyTorch 1.14 is the minimum required.
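
For reference, the sketch below illustrates what NVML-based device counting looks like in general; it uses the pynvml bindings and a simplified CUDA_VISIBLE_DEVICES check, and is not the vendored code this PR touches (the function name nvml_device_count is mine).

```python
# Illustrative sketch of NVML-based CUDA device counting (not the code vendored into PL).
# Querying NVML avoids initializing CUDA in the current process just to count devices.
import os


def nvml_device_count() -> int:
    try:
        import pynvml  # NVML Python bindings, e.g. from the nvidia-ml-py package
    except ImportError:
        return 0
    try:
        pynvml.nvmlInit()
    except pynvml.NVMLError:
        # No NVIDIA driver available: report zero devices.
        return 0
    try:
        count = pynvml.nvmlDeviceGetCount()
    finally:
        pynvml.nvmlShutdown()
    # Simplified CUDA_VISIBLE_DEVICES handling; a full implementation also deals
    # with device UUIDs and invalid entries.
    visible = os.environ.get("CUDA_VISIBLE_DEVICES")
    if visible is not None:
        ids = [d for d in visible.split(",") if d.strip()]
        count = min(count, len(ids))
    return count
```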

As of PyTorch 1.14, is_cuda_available() and num_cuda_devices() should be functional aliases for torch.cuda.is_available() and torch.cuda.device_count() respectively; they will no longer require PyTorch version-specific handling and can take advantage of potential upstream enhancements to those functions (e.g., if they abstract ROCm CUDA availability checks using ROCm's SMI analog to NVML).

It likely makes sense to retain the is_cuda_available() and num_cuda_devices() wrappers in the code base to facilitate future PyTorch version-specific handling if it becomes necessary, but in principle those wrappers could be replaced with their associated PyTorch functions as well.
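
To make the phasing concrete, here is a minimal sketch of how such version-gated wrappers can be structured (the flag names are assumptions and the fallback reuses nvml_device_count() from the sketch above; this is not the PR's exact code):

```python
# Illustrative sketch of version-gated CUDA helpers (assumed flag names, not this PR's exact code).
import torch
from packaging.version import Version

# .release keeps only the numeric release segment, ignoring dev/local suffixes
# such as "1.14.0.dev20221013" or "+cu117".
_TORCH_RELEASE = Version(torch.__version__).release
_TORCH_GREATER_EQUAL_1_13 = _TORCH_RELEASE >= (1, 13)
_TORCH_GREATER_EQUAL_1_14 = _TORCH_RELEASE >= (1, 14)


def num_cuda_devices() -> int:
    if _TORCH_GREATER_EQUAL_1_13:
        # From 1.13 on, torch.cuda.device_count() is NVML-based, so delegate directly.
        return torch.cuda.device_count()
    # Older PyTorch: fall back to a local NVML-based count (nvml_device_count() above)
    # so that counting devices does not initialize CUDA.
    return nvml_device_count()


def is_cuda_available() -> bool:
    if _TORCH_GREATER_EQUAL_1_14:
        # From 1.14 on, torch.cuda.is_available() is expected to be NVML-based as well.
        return torch.cuda.is_available()
    # Until then, redirect availability checks through the device count.
    return num_cuda_devices() > 0
```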

Also perhaps relevant: I've tested these changes locally on PyTorch 1.13.0rc3 and 1.14.0.dev20221013.

cc @awaelchli @carmocca

Does your PR introduce any breaking changes? If yes, please list them.

None

Before submitting

  • Was this discussed/approved via a GitHub issue? (not for typos and docs)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests? (not for typos and docs)
  • Did you verify new and existing tests pass locally with your changes?
  • Did you list all the breaking changes introduced by this pull request?
  • Did you update the CHANGELOG? (not for typos, docs, test updates, or minor internal changes/refactors)

PR review

Anyone in the community is welcome to review the PR.
Before you start reviewing, make sure you have read the review guidelines. In short, see the following bullet-list:

  • Is this pull request ready for review? (if not, please submit in draft mode)
  • Check that all items from Before submitting are resolved
  • Make sure the title is self-explanatory and the description concisely explains the PR
  • Add labels and milestones (and optionally projects) to the PR so it can be classified

Did you have fun?

I made sure I had fun 🙃

@github-actions bot added the pl (Generic label for PyTorch Lightning package) label on Oct 13, 2022
@speediedan speediedan marked this pull request as ready for review October 13, 2022 23:55
@codecov

codecov bot commented Oct 14, 2022

Codecov Report

Merging #15133 (27ac19a) into master (05d91c8) will increase coverage by 1%.
The diff coverage is 100%.

Additional details and impacted files
@@            Coverage Diff            @@
##           master   #15133     +/-   ##
=========================================
+ Coverage      82%      84%     +1%     
=========================================
  Files         408      288    -120     
  Lines       29907    21940   -7967     
=========================================
- Hits        24597    18329   -6268     
+ Misses       5310     3611   -1699     

@awaelchli
Contributor

@speediedan Not sure if I 100% understand the proposed change. Is it because partial changes are in 1.13, where device_count uses NVML yet is_available is not yet using the new implementation?

@speediedan
Contributor Author

> @speediedan Not sure if I 100% understand the proposed change. Is it because partial changes are in 1.13, where device_count uses NVML yet is_available is not yet using the new implementation?

Yep! It allows us to use the relevant PyTorch functions directly whenever they are available, only using the temporary PL versions when necessary. It also allows us to remove the NVML code copied from upstream into PL once PT 1.13 is the minimum (instead of waiting until PT 1.14 is the minimum).
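
For readers following the thread, here is a rough sketch of the kind of patching context manager being discussed; the body is illustrative and the real _patch_cuda_is_available() in Lightning may differ:

```python
# Rough, illustrative sketch: temporarily redirect torch.cuda.is_available() to an
# NVML-backed check so that third-party code called inside the block does not
# initialize CUDA. The real _patch_cuda_is_available() in Lightning may differ.
from contextlib import contextmanager
from typing import Iterator

import torch


def is_cuda_available() -> bool:
    # Stand-in for the NVML-backed wrapper described in the PR; on PyTorch >= 1.13,
    # torch.cuda.device_count() is already NVML-based.
    return torch.cuda.device_count() > 0


@contextmanager
def _patch_cuda_is_available() -> Iterator[None]:
    original = torch.cuda.is_available
    torch.cuda.is_available = is_cuda_available
    try:
        yield
    finally:
        torch.cuda.is_available = original
```

Code that checks torch.cuda.is_available() indirectly (e.g. through a third-party library) can then run under with _patch_cuda_is_available(): for as long as the redirection is needed, i.e. until PT 1.14 is the minimum.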

Review threads (outdated, resolved) on:
  • src/lightning_lite/plugins/precision/native_amp.py
  • src/pytorch_lightning/plugins/precision/native_amp.py
  • src/pytorch_lightning/strategies/colossalai.py
@mergify bot added the ready (PRs ready to be merged) label on Oct 18, 2022
@awaelchli awaelchli enabled auto-merge (squash) October 18, 2022 18:44
@awaelchli awaelchli added this to the v1.8 milestone Oct 18, 2022
@awaelchli awaelchli merged commit 776432f into Lightning-AI:master Oct 18, 2022