[ROCm] Fix: Conditionally import torch.distributed to support builds without distributed #4017
0xDELUXA wants to merge 1 commit into pytorch:main

Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/4017
Note: Links to docs will display an error until the docs builds have been completed.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Hi @0xDELUXA! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (eg your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!
Commit: Conditionally import torch.distributed in nf4tensor.py, adam.py, and quant_utils.py
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
Force-pushed from 5b3d502 to 00ab8af
Force-pushed from e38b36d to d21f924
Title changed from "Conditionally import torch.distributed in nf4tensor.py, adam.py, and quant_utils.py" to "[ROCm] Fix: Conditionally import torch.distributed to support builds without distributed"
Force-pushed from 9525be5 to d21289f
Force-pushed from d21289f to 7f5f895
Tracked down all issues preventing `import torchao`.
What does this PR do?
`torch.distributed` submodules are not available in all PyTorch builds (for example, Windows ROCm). Importing them unconditionally at the top level causes an immediate crash at import time. This makes `import torchao` fail entirely on those builds, even for users who have no intention of using distributed training. The fix is to guard those imports behind an availability check, as sketched below.
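A rough sketch of the guarded-import pattern, using a `torch.distributed.fsdp` import purely as an illustration (the files touched by this PR are nf4tensor.py, adam.py, and quant_utils.py; the exact imports there may differ):

```python
import torch

# torch.distributed itself is typically importable even in builds compiled without
# distributed support, but its submodules (fsdp, device_mesh, the C10d bindings, ...)
# are not. Guard the submodule import behind the availability check.
if torch.distributed.is_available():
    from torch.distributed.fsdp import FullyShardedDataParallel
else:
    # Fallback so the rest of the module still imports; code paths that actually
    # need FSDP should check for None (or call is_available()) before using it.
    FullyShardedDataParallel = None
```

transformers and accelerate apply essentially the same guard (see the prior art below).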
Prior art

The same fix was applied across the HuggingFace ecosystem:
- `torch.distributed.fsdp` in trainer_seq2seq.py (huggingface/transformers#44507)
- `torch.distributed.algorithms.join` in accelerator.py (huggingface/accelerate#3962)

Related issues
- `torch._C._distributed_c10d` (huggingface/diffusers#12409)

cc @danielvegamyhre @BowenBao
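As a quick sanity check of the end result, a hypothetical smoke test on a build without distributed support (not part of the PR itself) could look like this:

```python
# Importing torchao should no longer raise on PyTorch builds without distributed.
import torch
import torchao

print("torch:", torch.__version__)
print("torchao:", torchao.__version__)
print("distributed available:", torch.distributed.is_available())
```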