auto enable TPU checkpoint save and checkpoint load using the proper wrappers #1363
Comments
Ummm, great point. @dlibenzi any docs on this?
The recommended way to save a checkpoint is to use the xm.save() API. Saved tensors are PyTorch CPU tensors.
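A minimal sketch of that pattern, assuming torch_xla is installed and the model already lives on an XLA device (the model and file name here are illustrative, not from the thread):

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
model = torch.nn.Linear(10, 2).to(device)

# xm.save() moves XLA tensors to CPU before serializing, so the resulting
# file contains plain CPU tensors and can later be loaded without torch_xla.
xm.save(model.state_dict(), "model.pt")
```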
I thought we had automated this. Let's turn this PR into those feature requests.
Will be fixed in #2726.
This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions, Pytorch Lightning Team!
I trained a model on a TPU with Google Colab. When trying to load the checkpoint on a GPU, it gives the following error:
RuntimeError: Could not run 'aten::empty.memory_format' with arguments from the 'XLATensorId' backend. 'aten::empty.memory_format' is only available for these backends: [CUDATensorId, SparseCPUTensorId, VariableTensorId, CPUTensorId, MkldnnCPUTensorId, SparseCUDATensorId].
How can I load a checkpoint saved on a TPU onto a CPU or GPU?
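This error appears when the checkpoint still contains XLA device tensors, which only a torch_xla environment can deserialize. A sketch of the approach suggested above, assuming the checkpoint was written with xm.save() so it holds plain CPU tensors (the path and model variable are hypothetical):

```python
import torch

# A checkpoint produced by xm.save() contains CPU tensors, so it can be
# loaded on any machine; map_location pins deserialization to the CPU.
state_dict = torch.load("model.pt", map_location=torch.device("cpu"))
model.load_state_dict(state_dict)
model.to("cuda")  # move to GPU afterwards if desired
```

If the checkpoint was instead written with plain torch.save() on XLA tensors, it has to be re-opened in an environment with torch_xla available, moved to CPU, and re-saved before it can be loaded elsewhere.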