
Refactor 1: moved tpu xxx_step to backend #3118

Merged
merged 4 commits into master on Aug 24, 2020
Conversation

williamFalcon
Contributor

No description provided.

@williamFalcon williamFalcon changed the title moved tpu training_step to backend Refactor 1: moved tpu training_step to backend Aug 24, 2020
@mergify mergify bot requested a review from a team August 24, 2020 10:22
@williamFalcon williamFalcon changed the title Refactor 1: moved tpu training_step to backend Refactor 1: moved tpu xxx_step to backend Aug 24, 2020
from pytorch_lightning.utilities.apply_func import move_data_to_device


class Accelerator(object):
Member

let's make it abstract

@mergify mergify bot requested a review from a team August 24, 2020 10:58

codecov bot commented Aug 24, 2020

Codecov Report

Merging #3118 into master will decrease coverage by 0%.
The diff coverage is 39%.

@@          Coverage Diff           @@
##           master   #3118   +/-   ##
======================================
- Coverage      90%     90%   -0%     
======================================
  Files          81      82    +1     
  Lines        7738    7763   +25     
======================================
+ Hits         6979    6988    +9     
- Misses        759     775   +16     

@Borda
Member

Borda commented Aug 24, 2020

why would you move it to Accelerator? Won't it be the same in each backend, or how would it differ?

@williamFalcon
Contributor Author

Yes. It's the only way to make this more general: each backend handles its own hooks, so you can then call a backend directly.
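The pattern described above can be sketched as follows. This is a hypothetical illustration of the dispatch idea, not the actual Lightning classes: each backend owns its step hooks, so the trainer talks to the backend uniformly and never branches on hardware type. `TPUAccelerator`, `DummyModel`, and the `to_device` tagging are illustrative assumptions.

```python
# Sketch: each backend handles its own hooks, so the caller can invoke the
# backend directly without knowing which hardware it targets.

class Accelerator:
    def __init__(self, model):
        self.model = model

    def to_device(self, batch):
        # base backend: no device transfer needed
        return batch

    def training_step(self, batch, args):
        # shared hook logic: move the batch, then forward to the model
        batch = self.to_device(batch)
        args[0] = batch
        return self.model.training_step(*args)


class TPUAccelerator(Accelerator):
    def to_device(self, batch):
        # a real TPU backend would move the batch onto the XLA device;
        # here we just tag it to show that the override is what ran
        return ("on_tpu", batch)


class DummyModel:
    def training_step(self, batch, batch_idx):
        return {"batch": batch, "batch_idx": batch_idx}


# the trainer only talks to the backend; device details stay inside it
backend = TPUAccelerator(DummyModel())
out = backend.training_step([1, 2, 3], [None, 0])
```

Swapping `TPUAccelerator` for another subclass changes only the device-placement behavior; the hook call site stays identical.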

@williamFalcon williamFalcon merged commit 3c88b0d into master Aug 24, 2020
@Borda Borda deleted the ref0 branch August 24, 2020 11:11
Comment on lines +121 to +137
def training_step(self, batch, args):
batch = self.to_device(batch)
args[0] = batch
output = self.trainer.model.training_step(*args)
return output

def validation_step(self, batch, args):
batch = self.to_device(batch)
args[0] = batch
output = self.trainer.model.validation_step(*args)
return output

def test_step(self, batch, args):
batch = self.to_device(batch)
args[0] = batch
output = self.trainer.model.test_step(*args)
return output
Member

then this should live in the abstract class
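A minimal sketch of the reviewer's suggestion, assuming an abstract base class: the shared `xxx_step` logic lives in one place, and each concrete backend only overrides device placement. The class names (`AbstractAccelerator`, `CPUAccelerator`, `DummyModel`) are illustrative, not the actual Lightning API.

```python
from abc import ABC, abstractmethod


class AbstractAccelerator(ABC):
    def __init__(self, model):
        self.model = model

    @abstractmethod
    def to_device(self, batch):
        """Each concrete backend decides how to move a batch to its device."""

    def validation_step(self, batch, args):
        # shared hook logic, inherited unchanged by every backend
        batch = self.to_device(batch)
        args[0] = batch
        return self.model.validation_step(*args)


class CPUAccelerator(AbstractAccelerator):
    def to_device(self, batch):
        return batch  # nothing to move on CPU


class DummyModel:
    def validation_step(self, batch, batch_idx):
        return sum(batch) + batch_idx
```

With `to_device` marked abstract, instantiating the base class directly raises a `TypeError`, which enforces that every backend supplies its own device transfer.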

@mergify mergify bot requested a review from a team August 24, 2020 11:12
@Borda Borda added the refactor label Aug 24, 2020
@Borda Borda added this to the 0.9.x milestone Aug 25, 2020