Refactor 1: moved tpu xxx_step to backend #3118
Conversation
from pytorch_lightning.utilities.apply_func import move_data_to_device

class Accelerator(object):
let's make it abstract
Codecov Report
@@ Coverage Diff @@
## master #3118 +/- ##
======================================
- Coverage 90% 90% -0%
======================================
Files 81 82 +1
Lines 7738 7763 +25
======================================
+ Hits 6979 6988 +9
- Misses 759 775 +16
Why would you move it to Accelerator? It will be the same in each backend, or how would it differ?
Yes, it's the only way to make this more general: each backend handles its own hooks, and then you can just call a backend directly.
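To illustrate the point above (the trainer calls the backend directly instead of branching on hardware), here is a minimal hypothetical sketch; the names `Trainer`, `accelerator_backend`, and `run_training_batch` are illustrative assumptions, not the merged implementation:

```python
class Trainer:
    """Hypothetical trainer that delegates every step to its backend."""

    def __init__(self, accelerator_backend):
        # the backend owns its own hooks (to_device, training_step, ...)
        self.accelerator_backend = accelerator_backend

    def run_training_batch(self, batch, args):
        # no if-tpu/gpu/cpu branching here: just call the backend directly
        return self.accelerator_backend.training_step(batch, args)
```

Any object exposing `training_step(batch, args)` can then be plugged in as a backend.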
def training_step(self, batch, args):
    batch = self.to_device(batch)
    args[0] = batch
    output = self.trainer.model.training_step(*args)
    return output

def validation_step(self, batch, args):
    batch = self.to_device(batch)
    args[0] = batch
    output = self.trainer.model.validation_step(*args)
    return output

def test_step(self, batch, args):
    batch = self.to_device(batch)
    args[0] = batch
    output = self.trainer.model.test_step(*args)
    return output
Then this should be in the abstract class.
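The suggestion above (make `Accelerator` abstract and hoist the shared `xxx_step` logic into it) could be sketched as follows; the method names mirror the diff, but `CPUAccelerator` and the exact signatures are illustrative assumptions, not the merged code:

```python
from abc import ABC, abstractmethod


class Accelerator(ABC):
    """Sketch of the abstract base: shared step logic lives here once,
    and each backend only supplies its device-specific hooks."""

    def __init__(self, trainer):
        self.trainer = trainer

    @abstractmethod
    def to_device(self, batch):
        """Each backend moves the batch to its own device (CPU/GPU/TPU)."""

    def training_step(self, batch, args):
        # shared logic from the diff, written once in the base class
        batch = self.to_device(batch)
        args[0] = batch
        return self.trainer.model.training_step(*args)


class CPUAccelerator(Accelerator):
    """Hypothetical backend: only the abstract hook needs overriding."""

    def to_device(self, batch):
        return batch  # nothing to move for CPU
```

With this shape, `validation_step` and `test_step` would follow the same pattern, and adding a new backend means implementing `to_device` (and any other device hooks) rather than duplicating the step methods.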
No description provided.