ref: move backends back to individual files (1/5) (ddp_cpu) #3712
Conversation
Hello @williamFalcon! Thanks for updating this PR. There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻 Comment last updated at 2020-09-29 05:41:49 UTC
def get_device_ids(self):

class DDPCPUSpawnBackend(Accelerator):
naming nit: if run with torchelastic, then torchelastic does the spawn, not the backend. Also, this doesn't resolve the bug here: https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/accelerators/accelerator_connector.py#L142-L143
We basically need a DDPCPUBackend which takes torchelastic_ddp as a mode.
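A minimal sketch of the unified backend suggested in that comment: one DDPCPUBackend that takes the launch mode as a parameter instead of having separate spawn/torchelastic classes. This is an assumed shape, not the actual Lightning implementation; the names DDPMode, _spawn_processes, and _connect_existing_processes are illustrative only.

# Sketch only; not the real pytorch-lightning Accelerator API.
from enum import Enum


class DDPMode(Enum):
    DDP_SPAWN = "ddp_spawn"                 # the backend spawns worker processes itself
    TORCHELASTIC_DDP = "torchelastic_ddp"   # torchelastic launches the processes externally


class DDPCPUBackend:
    """Hypothetical single CPU DDP backend parameterized by launch mode."""

    def __init__(self, trainer, mode: DDPMode = DDPMode.DDP_SPAWN):
        self.trainer = trainer
        self.mode = mode

    def setup(self, model):
        if self.mode is DDPMode.DDP_SPAWN:
            # backend owns process creation
            self._spawn_processes(model)
        elif self.mode is DDPMode.TORCHELASTIC_DDP:
            # torchelastic already started the processes; only wire them up
            self._connect_existing_processes(model)
        else:
            raise ValueError(f"Unsupported DDP mode: {self.mode}")

    def _spawn_processes(self, model):
        ...  # e.g. torch.multiprocessing.spawn over CPU worker processes

    def _connect_existing_processes(self, model):
        ...  # e.g. init_process_group using env vars set by torchelastic

With this shape, the accelerator connector would instantiate one class and pass the mode, rather than choosing between two nearly identical backend classes.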
Codecov Report
@@           Coverage Diff            @@
##           master   #3712    +/-   ##
========================================
- Coverage      91%     90%     -1%
========================================
  Files         110     110
  Lines        8206    8303     +97
========================================
+ Hits         7463    7507     +44
- Misses        743     796     +53
The merged backends are too complicated to understand and debug.
cc @ananthsub
Also, a freebie for @edenafek: this now raises an exception when no valid backend is passed in.
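A minimal sketch of that guard, under assumptions: Lightning typically surfaces configuration errors via its MisconfigurationException, but a plain ValueError keeps this sketch self-contained, and the backend names listed are illustrative rather than the exact set the connector accepts.

# Sketch only; fail loudly on an unknown backend instead of silently falling back.
KNOWN_BACKENDS = {"dp", "ddp", "ddp_cpu", "ddp_spawn", "ddp2", "horovod"}


def select_backend(distributed_backend):
    """Validate the requested backend name before choosing an accelerator."""
    if distributed_backend not in KNOWN_BACKENDS:
        raise ValueError(
            f"Unknown distributed backend '{distributed_backend}'. "
            f"Expected one of: {sorted(KNOWN_BACKENDS)}"
        )
    # ... select and return the matching backend here


select_backend("ddp_cpu")   # OK
# select_backend("dpp")     # would raise: Unknown distributed backend 'dpp'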