Support deterministic="warn" in Trainer for PyTorch 1.11+ #12588
Conversation
In PyTorch 1.11, operations that lack a deterministic implementation can be configured to emit a warning instead of raising an error when run in deterministic mode. See https://pytorch.org/docs/1.11/generated/torch.use_deterministic_algorithms.html
The full error message `pytorch_lightning/trainer/connectors/accelerator_connector.py:216: error: Unexpected keyword argument "warn_only" for "use_deterministic_algorithms" [call-arg]` is raised because the code checks run against PyTorch 1.10; the `warn_only` call should only be made on PyTorch 1.11+.
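For context, a minimal sketch of how the warn-only mode can be enabled directly in PyTorch, guarded by a version check (the guard shown here is illustrative and assumes the `packaging` package; Lightning uses its own version utilities internally):

import torch
from packaging.version import Version

# warn_only is only available from PyTorch 1.11 onwards
if Version(torch.__version__) >= Version("1.11.0"):
    # Non-deterministic ops emit a UserWarning instead of raising a RuntimeError
    torch.use_deterministic_algorithms(True, warn_only=True)
else:
    torch.use_deterministic_algorithms(True)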
Cool! Pushed a commit with some minor changes
Nice addition!
The type check is failing in CI: https://github.com/PyTorchLightning/pytorch-lightning/runs/5853483314?check_suite_focus=true
Once the failure is resolved, LGTM!
Cool, thanks everyone for sorting out the type hints! Great to learn about "Literal" types 😆
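For readers unfamiliar with the `Literal` types mentioned above, a hypothetical sketch of how the `deterministic` argument could be annotated (the function name and exact signature here are illustrative, not taken from the Lightning source):

from typing import Literal, Optional, Union

def set_deterministic(deterministic: Optional[Union[bool, Literal["warn"]]]) -> None:
    # A type checker narrows the allowed strings to exactly "warn"
    print(f"deterministic={deterministic!r}")

set_deterministic(True)     # OK
set_deterministic("warn")   # OK
set_deterministic("maybe")  # runs, but rejected by mypy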
I find this a confusing feature. Either I want determinism or I don't; something in-between like "warn" does not exist for me. Perhaps I'm missing the use case here.
This thread might be a useful read: pytorch/pytorch#64883. Ideally, yes, full determinism would be desirable, but in practice not every algorithm has a deterministic implementation, and this gets us 90% of the way there (depending on the model being used). I'm coming at it from a reproducibility standpoint: being able to reproduce 90% of a model's result is better than not being able to reproduce it at all because of all the randomness going on.
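For reference, the feature this PR adds is used like this (a minimal sketch; the Trainer argument is confirmed by the PR title, and the warning behavior follows the linked PyTorch 1.11 docs):

import pytorch_lightning as pl

# Non-deterministic ops emit warnings instead of raising (PyTorch 1.11+)
trainer = pl.Trainer(deterministic="warn")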
Hi, nice update, which is useful for me. Could it be that the add_argparse_args() method in Trainer needs to be adapted as well? Currently it accepts only boolean values for the deterministic keyword; at least I don't know how to pass the "warn" flag using argparse. Thanks!
Hmm, looking at …
Yes, I did install from GitHub today. I have not had a look at the source code, but I observe that when I run something like …
@mareikethies any particular reason you want to use …
Thanks @mareikethies, that error message is very helpful. I can confirm the issue with the following minimal script:
import argparse

import pytorch_lightning as pl
import torch


class BasicNet(pl.LightningModule):
    """Minimal model: a single linear layer trained to reproduce its input."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(2, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        output = self(batch)
        loss = torch.nn.functional.mse_loss(batch, output)
        return {"loss": loss}

    def configure_optimizers(self):
        return torch.optim.Adam(params=self.parameters(), lr=0.1, weight_decay=0.0005)


class RandomDataset(torch.utils.data.Dataset):
    """Dataset that yields a single random 2-element sample."""

    def __len__(self):
        return 1

    def __getitem__(self, idx: int):
        return torch.randn(2)


def cli_main():
    # Expose all Trainer arguments (including --deterministic) on the CLI
    parser = argparse.ArgumentParser()
    parser = pl.Trainer.add_argparse_args(parent_parser=parser)
    args = parser.parse_args()

    trainer = pl.Trainer.from_argparse_args(args=args, max_epochs=1, accelerator="auto")
    model = BasicNet()
    dataloader = torch.utils.data.DataLoader(dataset=RandomDataset())
    trainer.fit(model=model, train_dataloaders=dataloader)


if __name__ == "__main__":
    cli_main()
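Presumably invoked with the new value, something like the following (script name hypothetical), it reproduces the argparse failure:

python repro.py --deterministic warn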
So my guess is that the code here might need to change: …
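One possible direction, a hypothetical sketch not taken from the actual patch: give the argument an argparse type function that accepts boolean-like strings as well as the literal "warn".

import argparse

def bool_or_warn(value: str):
    # Hypothetical helper: map CLI strings to True / False / "warn"
    lowered = value.lower()
    if lowered in ("true", "1", "yes"):
        return True
    if lowered in ("false", "0", "no"):
        return False
    if lowered == "warn":
        return "warn"
    raise argparse.ArgumentTypeError(f"expected a boolean or 'warn', got {value!r}")

parser = argparse.ArgumentParser()
parser.add_argument("--deterministic", type=bool_or_warn, default=None)
print(parser.parse_args(["--deterministic", "warn"]))  # Namespace(deterministic='warn')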
What does this PR do?
In PyTorch 1.11, operations that lack a deterministic implementation can be configured to emit a warning instead of raising an error when run in deterministic mode. See https://pytorch.org/docs/1.11/generated/torch.use_deterministic_algorithms.html
Fixes #<issue_number>
Does your PR introduce any breaking changes? If yes, please list them.
Before submitting
PR review
Anyone in the community is welcome to review the PR.
Before you start reviewing, make sure you have read the Review guidelines.
Did you have fun?
Make sure you had fun coding 🙃