TPU available: true when there are no TPUs #3104
Sounds like a misconfiguration issue. Are you interested in sending a PR? 🐰
Sure. I realized that the bug is in this script. Specifically:
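The logic boils down to an import check, roughly like this (paraphrased, not the exact source):

```python
# Paraphrase of the detection logic being discussed: TPU availability
# is inferred purely from whether torch_xla can be imported.
try:
    import torch_xla.core.xla_model as xm  # noqa: F401
    XLA_AVAILABLE = True
except ImportError:
    XLA_AVAILABLE = False

# The Trainer then logs "TPU available: True" whenever torch_xla is
# importable, whether or not any TPU hardware is attached.
```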
So, if the environment has torch_xla installed, the check reports `TPU available: True` even when no TPU is attached.
Yes, we had the XLA detection as a temporary solution, as we did not expect someone would install XLA without having a TPU...
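One way to make the check robust is to try to actually acquire an XLA device instead of keying off the import alone. A minimal sketch (the helper name is mine, not Lightning's API):

```python
def tpu_is_usable() -> bool:
    """Return True only if an XLA device can actually be acquired.

    Importing torch_xla alone is not enough, since the package can be
    installed on machines without a TPU.
    """
    try:
        import torch_xla.core.xla_model as xm
    except ImportError:
        return False
    try:
        # Raises RuntimeError ("Missing XLA configuration") when no
        # TPU is attached, which is exactly the error from the report.
        xm.xla_device()
        return True
    except RuntimeError:
        return False
```

A probe like this can hang on some setups, so a production version would likely want to run it in a separate process with a timeout.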
I faced this issue on a GCloud VM with a GPU but no TPU. torch_xla was installed by default in the Deep Learning image, and the only solution was to manually uninstall it (`pip uninstall torch_xla`).
@realsarm this worked for me, on GCloud with one GPU but no TPU. Thanks a lot :)
🐛 Bug
I am using a DGX machine (and so, no TPUs), but on initiating `Trainer`, it logs `TPU available: True`. This ends up raising `Missing XLA configuration` when I run my script.

To Reproduce
Code sample
Simply running the following lines on my machine:
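A minimal reproduction consistent with the report, assuming a default Trainer (the original lines may have differed):

```python
import pytorch_lightning as pl

# On a machine with torch_xla installed but no TPU attached,
# constructing a Trainer logs "TPU available: True".
trainer = pl.Trainer()
```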
Expected behavior
`TPU available: False` should be logged on a machine without TPUs, and the script should run without requiring an XLA configuration.
Environment