Remove `Strategy.on_tpu` property #11536
Conversation
Pull request was closed
+1 to this change. I realized `on_tpu` is not actually used anywhere except in tests. Other than `on_tpu()`, the `_XLA_AVAILABLE` check is only used at import time. Should we have a device availability check at `X_Accelerator` initialization time?
Codecov Report
@@            Coverage Diff            @@
##           master   #11536     +/-   ##
=========================================
- Coverage      92%      88%      -4%
=========================================
  Files         194      194
  Lines       16972    16961      -11
=========================================
- Hits        15611    14964     -647
- Misses       1361     1997     +636
@kaushikb11 what do you think about the above? In general, I am in favor of those checks running as early as possible to avoid wasting users' time. In the GPU accelerator, we do check this here: https://github.com/PyTorchLightning/pytorch-lightning/blob/f41d1e5e5ebb7040a39d137695e818cada9a9234/pytorch_lightning/accelerators/gpu.py#L36-L44 But we don't assert that CUDA is available - should we? @four4fish Would this also simplify the accelerator connector initialization logic? Regardless, I'd like to handle it in a separate issue & PR 😄
@ananthsub @kaushikb11 I think we can introduce a new function called `_device_check()` in `X_Accelerator`, which would be called by the accelerator connector right after accelerator initialization.
@four4fish @ananthsub I agree. We could add
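A minimal sketch of the availability check discussed in this thread. The class names, the `_device_check()` hook, and the connector-style call site below are illustrative assumptions, not code from this PR or from pytorch-lightning itself:

```python
# Hypothetical sketch of the `_device_check()` idea discussed above.
# Names and placement are assumptions, not code from this PR.
import torch


class TPUAcceleratorSketch:
    """Stand-in for a TPU accelerator exposing an availability hook."""

    @staticmethod
    def _device_check() -> None:
        # Fail fast if the XLA runtime is missing instead of erroring later,
        # deep inside training setup.
        try:
            import torch_xla.core.xla_model  # noqa: F401
        except ImportError as err:
            raise RuntimeError("TPU accelerator requested but torch_xla is not installed.") from err


class GPUAcceleratorSketch:
    """Stand-in for a GPU accelerator exposing the same hook."""

    @staticmethod
    def _device_check() -> None:
        if not torch.cuda.is_available():
            raise RuntimeError("GPU accelerator requested but CUDA is not available.")


def init_accelerator(accelerator_cls):
    """Mimics the accelerator connector running the check right after instantiation."""
    accelerator = accelerator_cls()
    accelerator._device_check()
    return accelerator
```

With something like this in place, `init_accelerator(GPUAcceleratorSketch)` on a CPU-only machine would raise immediately, which is the early failure the thread is asking for.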
LGTM !
What does this PR do?
Part of #10416
Reasons for removal:
- The same information is already available through the `root_device` property (a short migration sketch follows below). The property `on_tpu` is redundant, yet it's also an `abstractmethod`; requiring every strategy class to implement it is boilerplate.
- There is no `on_ipu` or `on_X` for existing accelerator implementations, and adding `on_gpu` and `on_tpu` flags doesn't scale.
Does your PR introduce any breaking changes? If yes, please list them.
Yes: removes the `on_tpu` property from the Strategy base class.
Reason: The Strategy is still an experimental API, and this is part of making it stable.
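A hedged migration sketch, not part of this PR's diff: it assumes callers can reach the strategy's `root_device` property (which the description names as the replacement) and that XLA/TPU devices report `device.type == "xla"`.

```python
# Hypothetical migration sketch. With `Strategy.on_tpu` removed, code that
# previously branched on that flag can inspect the strategy's `root_device`
# instead; `root_device` here stands for whatever device the strategy reports.
import torch


def describe_device(root_device: torch.device) -> str:
    # Before (removed by this PR):
    #     if strategy.on_tpu: ...
    # After: derive the same information from `root_device`.
    if root_device.type == "xla":
        return "running on a TPU core"
    if root_device.type == "cuda":
        return f"running on GPU {root_device.index}"
    return f"running on {root_device}"


print(describe_device(torch.device("cpu")))      # running on cpu
print(describe_device(torch.device("cuda", 0)))  # running on GPU 0
```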
Before submitting
PR review
Anyone in the community is welcome to review the PR.
Before you start reviewing, make sure you have read the Review guidelines. In short, see the following bullet list:
Did you have fun?
Make sure you had fun coding 🙃