
Accelerator Refactor: Precision Plugins #5718

Merged Jan 31, 2021 (14 commits)
add tpu bfloat
Co-authored-by: Adrian Wälchli <aedu.waelchli@gmail.com>
justusschock and awaelchli committed Jan 31, 2021

commit 1b82554f74d661adfd7356ff7c162811b3775dbf
28 changes: 28 additions & 0 deletions pytorch_lightning/plugins/precision/tpu_bfloat.py
@@ -0,0 +1,28 @@
# Copyright The PyTorch Lightning team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os

import torch

from pytorch_lightning.plugins.precision.precision_plugin import PrecisionPlugin


class TPUHalfPrecisionPlugin(PrecisionPlugin):
    """Plugin that enables bfloat16 on TPUs."""

    precision = 16

    def connect(self, model: torch.nn.Module, optimizers, lr_schedulers):
        # torch_xla reads this flag at tensor creation time: with XLA_USE_BF16=1,
        # float32 tensors are stored as bfloat16 on the TPU.
        os.environ["XLA_USE_BF16"] = str(1)
        return super().connect(model=model, optimizers=optimizers, lr_schedulers=lr_schedulers)

Contributor (inline review comment on this class):
ooo I didn't see this, this is nice :)
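For context, a minimal sketch of what calling this plugin's connect() hook does. It runs on CPU since the hook only touches an environment variable; the toy Linear model and the direct connect() call are illustrative assumptions, not usage prescribed by this PR (in the accelerator refactor, the trainer wires plugins up internally), and it assumes the base PrecisionPlugin.connect returns its arguments unchanged, as it does at this point in the refactor:

import os

import torch

from pytorch_lightning.plugins.precision.tpu_bfloat import TPUHalfPrecisionPlugin

# Hypothetical toy model, only to give connect() something to pass through.
model = torch.nn.Linear(32, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

plugin = TPUHalfPrecisionPlugin()
model, optimizers, lr_schedulers = plugin.connect(model, [optimizer], [])

# connect() has set the flag that torch_xla checks when materializing tensors;
# on an actual TPU host, float32 tensors would now be stored as bfloat16.
assert os.environ["XLA_USE_BF16"] == "1"

Note the design choice this reflects: unlike the AMP precision plugins, which have to wrap the forward and backward passes, the TPU plugin only flips an environment variable, because torch_xla itself handles the float32-to-bfloat16 mapping when XLA_USE_BF16 is set.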