
Conversation

@jeffhataws
Contributor

What does this PR do?

This PR enables torchrun for XLA-based accelerators (TPU/NeuronCore) by using torch.distributed XLA backend. It is dependent on the torch/xla change pytorch/xla#3609.

An example application is the AWS Neuron tutorial with the HF Trainer that uses torchrun:

https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html
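To make the mechanism concrete, here is a minimal sketch (not the actual Trainer changes) of how a training script launched with torchrun can initialize torch.distributed with the XLA backend, assuming a torch/xla build that includes pytorch/xla#3609:

```python
# Minimal sketch: initialize torch.distributed with the XLA backend in a
# script launched via torchrun (illustrative, not the exact Trainer code).
import torch.distributed as dist
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_backend  # noqa: F401  -- registers the "xla" backend

# torchrun exports RANK, WORLD_SIZE and LOCAL_RANK, so the default env:// rendezvous works.
dist.init_process_group(backend="xla")

device = xm.xla_device()  # this worker's XLA device (TPU core or NeuronCore)
print(f"rank {dist.get_rank()}/{dist.get_world_size()} running on {device}")
```

Launched, for example, with `torchrun --nproc_per_node=2 train.py`, as in the Neuron tutorial linked above.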

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@sgugger

Collaborator

@sgugger left a comment

Thanks for your PR! There are a few problems with it, though, as it breaks the current support for PyTorch XLA.

Comment on lines 1328 to 1331
Collaborator

No, the model needs to be wrapped in a DistributedDataParallel for TPU.

Contributor Author
@jeffhataws Oct 27, 2022

Thanks. I limited this case to torchrun in the latest commit and clarified that DDP doesn't currently work with the torch.distributed XLA backend (pytorch/pytorch#79164).
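As an illustration of that gating (a hypothetical sketch, not the code in the commit): detect the torchrun launch from the environment variables torchrun exports, and skip the DistributedDataParallel wrap when the process group uses the XLA backend, since DDP does not yet support it (pytorch/pytorch#79164).

```python
# Hypothetical sketch of the gating described above; names are illustrative,
# not the actual Trainer code.
import os

import torch
import torch.distributed as dist


def launched_with_torchrun() -> bool:
    # torchrun exports these variables for every worker it spawns.
    return all(v in os.environ for v in ("RANK", "WORLD_SIZE", "LOCAL_RANK"))


def maybe_wrap_ddp(model: torch.nn.Module) -> torch.nn.Module:
    # DDP does not yet work with the XLA backend (pytorch/pytorch#79164),
    # so only wrap when a non-XLA process group is initialized.
    if dist.is_available() and dist.is_initialized() and dist.get_backend() != "xla":
        return torch.nn.parallel.DistributedDataParallel(model)
    return model
```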

Collaborator

In this case we should wait until it's supported. This will break the current TPU support.

Comment on lines +62 to +65
Collaborator

This should be moved to the _setup_devices method of TrainingArguments. It shouldn't be done on import.
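For reference, a minimal sketch of the lazy-initialization pattern being requested here; `ExampleTrainingArguments` is an illustrative stand-in, not the real `TrainingArguments`:

```python
# Illustrative stand-in for TrainingArguments._setup_devices; not the actual
# transformers code. The point is that the process group is initialized lazily,
# when devices are first set up, rather than as a side effect of importing a module.
import os

import torch
import torch.distributed as dist


class ExampleTrainingArguments:
    @property
    def _setup_devices(self) -> "torch.device":
        if "RANK" in os.environ and not dist.is_initialized():
            try:
                # Registers the "xla" backend with torch.distributed
                # (requires torch_xla with pytorch/xla#3609).
                import torch_xla.core.xla_model as xm
                import torch_xla.distributed.xla_backend  # noqa: F401

                dist.init_process_group(backend="xla")
                return xm.xla_device()
            except ImportError:
                dist.init_process_group(backend="gloo")
        return torch.device("cpu")
```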

Contributor Author

Thanks. Unfortunately I can't move it into _setup_devices for some reason; it runs into a device-not-found error. I will investigate.

Collaborator

FYI, we don't have any runner testing on XLA for now, so this test will never be run.

Contributor Author

Thanks. I will get help with running these tests.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.

@github-actions
Contributor

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

@jeffhataws force-pushed the add_aws_neuron_torchrun_support branch from f882cb6 to f339bd9 on November 29, 2022
@github-actions bot closed this on Dec 8, 2022
@jeffhataws mentioned this pull request on Dec 17, 2022