Cast on host instead of IPU when using precision=16
#13880
Conversation
# We don't call `super().batch_to_device` because `data.to(device)` is not
# currently necessary for IPUs. The movement of data from host<->IPU is
# currently handled by PopTorch.
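For intuition, here is a minimal, hypothetical sketch of the idea described in this comment (not the actual strategy code): the batch is prepared on the host and returned without any explicit .to(device) call, since PopTorch is expected to move it to the IPU.

from typing import Any, Optional

import torch


class SketchIPUStrategy:
    """Hypothetical, simplified stand-in for an IPU strategy."""

    def batch_to_device(self, batch: Any, device: Optional[torch.device] = None, dataloader_idx: int = 0) -> Any:
        # Deliberately no `super().batch_to_device(...)` and no `batch.to(device)`:
        # the host<->IPU transfer is assumed to be handled by PopTorch.
        return batch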
if that's the case, we need to do a follow-up for users to let them know that some of the hooks won't work with IPUs, like transfer_batch_to_device, ...
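For reference, transfer_batch_to_device is the LightningModule hook users override to customize how a batch is moved to the device. A sketch of such an override (illustrative only), which per the comment above would not take effect on IPUs because PopTorch owns the transfer:

import pytorch_lightning as pl


class MyModel(pl.LightningModule):
    def transfer_batch_to_device(self, batch, device, dataloader_idx):
        # Custom host->device movement. On IPUs this kind of override would not
        # be applied in the usual way, since PopTorch handles the data transfer.
        if isinstance(batch, dict):
            return {k: v.to(device) for k, v in batch.items()}
        return super().transfer_batch_to_device(batch, device, dataloader_idx)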
Is this something you could do? I don't know the codebase well enough to know all the implications.
Codecov Report
@@            Coverage Diff            @@
##           master   #13880     +/-   ##
=========================================
+ Coverage      49%      75%       +26%
=========================================
  Files         332      332
  Lines       26010    26810       +800
=========================================
+ Hits        12728    20078      +7350
+ Misses      13282     6732      -6550
Thank you for working on this!
LGTM
# This override is necessary because the cast must occur before the data
# is moved to the device to prevent wasteful host->device copies.
if self.precision_plugin.precision in (PrecisionType.MIXED, PrecisionType.HALF):
    batch = apply_to_collection(batch, (FloatTensor, torch.cuda.FloatTensor), function=Tensor.half)
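For context, apply_to_collection walks a (possibly nested) collection and applies the function only to the elements matching the given type(s). A small, illustrative usage sketch, assuming the import path Lightning used around the time of this PR:

import torch
from torch import FloatTensor, Tensor
from pytorch_lightning.utilities.apply_func import apply_to_collection

# Toy nested batch: only elements matching the dtype argument are transformed.
batch = {
    "images": torch.randn(2, 3),           # float32 -> cast to float16
    "labels": torch.tensor([0, 1]),         # int64   -> left untouched
    "meta": [torch.randn(4), "a string"],   # nested containers are traversed
}

half_batch = apply_to_collection(batch, FloatTensor, function=Tensor.half)
print(half_batch["images"].dtype)  # torch.float16
print(half_batch["labels"].dtype)  # torch.int64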
Suggested change:
-    batch = apply_to_collection(batch, (FloatTensor, torch.cuda.FloatTensor), function=Tensor.half)
+    batch = apply_to_collection(batch, Tensor, function=Tensor.half)
if we need to move all kinds of tensors for IPUs
Is this the change that caused #13983?
that's part of it, but previously e.g. DoubleTensors, BFloat16 tensors, etc. wouldn't have been cast either.
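A small illustration (not from the PR itself) of the point: matching only FloatTensor covers CPU float32 tensors, while double or bfloat16 tensors slip through, whereas matching Tensor catches them all but also converts non-float tensors when Tensor.half is applied.

import torch
from torch import FloatTensor, Tensor

f32 = torch.randn(2)                       # CPU float32
f64 = torch.randn(2, dtype=torch.float64)  # double
bf16 = torch.randn(2).to(torch.bfloat16)   # bfloat16

print(isinstance(f32, FloatTensor))   # True  -> matched by the FloatTensor-only check
print(isinstance(f64, FloatTensor))   # False -> missed by it, stays float64
print(isinstance(bf16, FloatTensor))  # False -> missed by it, stays bfloat16

# Matching `Tensor` instead catches every tensor type ...
print(all(isinstance(t, Tensor) for t in (f32, f64, bf16)))  # True
# ... but note that `Tensor.half` then also converts integer tensors to float16.
print(torch.tensor([1, 2]).half().dtype)  # torch.float16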
Co-authored-by: Rohit Gupta <[email protected]>
What does this PR do?
Removes the cast to half from the forward pass and adds it to the host-side pre-processing.
In overriding batch_to_device, it also prevents data.to(device), which is not currently necessary for IPUs.
Fixes #13828
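Conceptually, the change moves the cast out of the forward pass and into host-side batch preparation; a hypothetical before/after sketch (illustrative names, not the actual Lightning code):

import torch
from torch import Tensor


# Before (illustrative): the cast happens inside the forward pass, i.e. on the device.
def forward_with_cast(module: torch.nn.Module, batch: Tensor) -> Tensor:
    return module(batch.half())


# After (illustrative): the cast happens on the host while the batch is prepared,
# so the data already reaches the IPU as float16 and no `.to(device)` is needed here.
def prepare_batch_on_host(batch: Tensor) -> Tensor:
    return batch.half() if batch.is_floating_point() else batch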
Does your PR introduce any breaking changes? If yes, please list them.
Before submitting
PR review
Anyone in the community is welcome to review the PR.
Before you start reviewing, make sure you have read the review guidelines. In short, see the following bullet-list:
Did you have fun?
Make sure you had fun coding 🙃