Does the library support parallel GPU computation when training the neural networks? Is there a way to enforce that?
Answered by skababji, May 31, 2024
Would be interested in this; any updates?
Reply from skababji (marked as the answer):
In your code, define multiple processes and direct each process to a specific GPU using `torch.cuda.set_device(<your_gpu_index>)`.
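
As a rough illustration of that suggestion (not code from the library itself), the sketch below spawns one worker process per visible GPU with `torch.multiprocessing.spawn` and pins each process to its own device via `torch.cuda.set_device`. The `train_on_gpu` worker, the toy model, and the training loop are placeholders; you would substitute the library's own training call inside the worker.

```python
import torch
import torch.multiprocessing as mp


def train_on_gpu(rank: int, num_epochs: int) -> None:
    """Placeholder worker: pins this process to one GPU and trains there."""
    # Direct every CUDA allocation made by this process to GPU `rank`.
    torch.cuda.set_device(rank)
    device = torch.device(f"cuda:{rank}")

    # Stand-in model, optimizer, and data; replace with the library's training call.
    model = torch.nn.Linear(16, 1).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(num_epochs):
        x = torch.randn(32, 16, device=device)
        y = torch.randn(32, 1, device=device)
        loss = torch.nn.functional.mse_loss(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()


if __name__ == "__main__":
    # One worker process per visible GPU; each trains independently on its own device.
    num_gpus = torch.cuda.device_count()
    mp.spawn(train_on_gpu, args=(5,), nprocs=num_gpus)
```

Note that this runs independent training jobs in parallel (one per GPU); it is not the same as distributed data-parallel training of a single model, which would instead need something like `torch.nn.parallel.DistributedDataParallel`.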