Adding support for torchrun in xla backend #3609
Conversation
@amithrm Can you provide a test for this new feature?

@JackCaoG Sure, will add tests.

@JackCaoG I changed the initialization a bit to take into account how Slurm configures the devices. Please take a look at it and at the test cases. All of these would need more modifications after we discuss.

I have to admit that I am not an expert on torchrun; let me read up on the documentation first, lol. Looping in @will-cromar to make sure this does not conflict with our future PJRT runtime.

We did some internal testing. It appears that at scale we see issues with the setup of gRPC channels. We should check whether you see similar issues at your end too.

@JackCaoG Run command: GPU_NUM_DEVICES=2 python3 allreduce_xla.py. This will output XRT_LOCAL_WORKER: localservice:0. If you look at XRT_WORKERS, it has the gRPC string for each worker. This won't scale with the number of workers.
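
For context, a minimal sketch of how these XRT variables can be inspected from Python (the variable names come from the comment above; the host/port values are illustrative only):

```python
# Illustrative only: print the XRT configuration the XLA backend reads at
# startup. The exact values depend on how the job was launched.
import os

# Identifies this process's worker, e.g. "localservice:0".
print(os.environ.get('XRT_LOCAL_WORKER'))

# One "<worker>;grpc://<host>:<port>" entry per worker, '|'-separated, e.g.
# "localservice:0;grpc://localhost:40934|localservice:1;grpc://localhost:40935".
# This is the per-worker gRPC string that does not scale with the worker count.
print(os.environ.get('XRT_WORKERS'))
```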

We want to support torchrun with the new PJRT runtime; in the meantime, if this torchrun utility can unblock AWS folks, we can also take it. I am a bit hesitant about claiming official support for XRT:TPU + torchrun. @will-cromar let's investigate what the gap is here; if it is free we might as well just take it.
will-cromar left a comment:
This doesn't conflict with PJRT, but we will have to add a new code path for it. Runtime configuration is much simpler under PJRT, so I expect that we can simplify a lot of this.
@JackCaoG what do you think of moving this under torch_xla.experimental for now?
test/test_allreduce_torchrun.py (outdated)

    cmd = "torchrun --nproc_per_node=2 --master_addr=127.0.0.1 --master_port=2020 allreduce_torchrun.py "

    new_env0 = os.environ.copy()
    new_env0['NEURON_RT_VISIBLE_CORES'] = '0,1'
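
For reference, a minimal sketch of how a test like this might drive torchrun (not the exact test in this PR; the second environment, the extra master port, and the '2,3' core assignment are assumptions for illustration):

```python
# Sketch only: launch two independent torchrun process groups, each pinned to
# a different pair of Neuron cores via NEURON_RT_VISIBLE_CORES, and check
# that both exit cleanly.
import os
import subprocess

cmd0 = ("torchrun --nproc_per_node=2 --master_addr=127.0.0.1 "
        "--master_port=2020 allreduce_torchrun.py")
# A second group needs its own rendezvous port (hypothetical choice).
cmd1 = cmd0.replace("--master_port=2020", "--master_port=2021")

env0 = os.environ.copy()
env0['NEURON_RT_VISIBLE_CORES'] = '0,1'
env1 = os.environ.copy()
env1['NEURON_RT_VISIBLE_CORES'] = '2,3'

procs = [
    subprocess.Popen(cmd0, shell=True, env=env0),
    subprocess.Popen(cmd1, shell=True, env=env1),
]
for p in procs:
  assert p.wait() == 0
```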
I'm not familiar with Neuron at all. Do we need to install anything extra beyond PyTorch and PyTorch/XLA to run this test? If not, can you add a GPU CI test like this?
Lines 85 to 90 in 590dee5:

    if [ -x "$(command -v nvidia-smi)" ]; then
      PJRT_DEVICE=GPU run_test "$@"
    else
      # TODO(darisoy): run these tests with multiple CPU devices, this fails due to TF issue.
      PJRT_DEVICE=CPU CPU_NUM_DEVICES=1 run_test "$@"
    fi
Yes, these are Neuron-specific env variables. I will add the GPU-specific ones. But we have PJRT_DEVICE listed above; will this interfere with torchrun?
Nope, there shouldn't be an issue. PJRT won't work with this PR, but you can add a new function to run_tests.sh that sets up anything Neuron-related and skips setting PJRT_DEVICE (i.e., make a run_neuron function like run_pjrt).
Hi @will-cromar, it looks like the file allreduce_torchrun.py is not picked up in the CI tests. What is a good way to fix this? Is there a global prefix that I can add before the file name?

In terms of moving the code to experimental, I guess it comes down to how we want to set user expectations. Are there any known caveats for this feature? If it works well, we don't need to put it in experimental. However, we should add a README similar to https://github.com/pytorch/xla/blob/master/docs/ddp.md

@will-cromar can you take another pass on this PR when you have some time?
will-cromar left a comment:
Thanks! Overall LGTM. Please add a CI test (run_tests.sh) and then we can merge this

Looks like the file needed for the test (allreduce_torchrun.py) is not getting picked up. Checking with @will-cromar on how to fix this. Some yapf fixes are also pending in one file.
f263be6 to e71414a

@will-cromar I see a build failure: NameError: name 'sympy' is not defined

Weird, head is green right now: https://github.com/pytorch/xla/commits/master

Ah, OK, https://github.com/pytorch/xla/pull/4313/files should fix it. Can you rebase again?
…ated tests" This reverts commit a66060c.

@JackCaoG All 4 tests pass. @will-cromar, is there anything else needed?
will-cromar left a comment:
LGTM
JackCaoG left a comment:
Thanks!
This pull request adds the code needed to integrate the torchrun launcher with the xla backend.
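
For illustration, a minimal sketch of what a torchrun-launched allreduce script using the xla backend could look like (not necessarily the allreduce_torchrun.py added by this PR; the script body is an assumption based on the xla process-group backend):

```python
# Sketch: an allreduce over the 'xla' torch.distributed backend.
# Launch with, e.g.: torchrun --nproc_per_node=2 allreduce_torchrun.py
import torch
import torch.distributed as dist
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_backend  # registers the 'xla' backend


def main():
  # torchrun sets RANK, WORLD_SIZE, MASTER_ADDR and MASTER_PORT for us.
  dist.init_process_group('xla')
  device = xm.xla_device()

  # Each rank contributes a tensor filled with its own rank id.
  t = torch.full((4,), float(dist.get_rank()), device=device)
  dist.all_reduce(t, op=dist.ReduceOp.SUM)
  xm.mark_step()

  expected = float(sum(range(dist.get_world_size())))
  assert torch.allclose(t.cpu(), torch.full((4,), expected))
  print(f"rank {dist.get_rank()}: all_reduce OK")


if __name__ == '__main__':
  main()
```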