🐛 Bug

Setting the Trainer flag overfit_batches (e.g. overfit_batches=10) does not overwrite the shuffle flag set in the training dataloader, even though the warning reads:

UserWarning: You requested to overfit but enabled training dataloader shuffling. We are turning it off for you.
To Reproduce
Steps to reproduce the behavior:
(I use a rising dataloader, but the bug should also occur with plain PyTorch dataloaders.)

1. Create a LightningModule whose train_dataloader method builds its DataLoader with shuffle=True.
2. Train with the Trainer flag overfit_batches set (e.g. overfit_batches=10) and note that the model does not converge, despite the warning.
3. Set shuffle=False when creating the DataLoader in train_dataloader.
4. See that your model now converges after some epochs.

(Or log the samples loaded by the dataloader and check whether they are the same each epoch; see the code sample below.)
Code sample
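The issue was filed without a runnable sample. Below is a minimal, hypothetical sketch using plain PyTorch tensors (the original used a rising dataloader); the model, data, and hyperparameters are made up for illustration:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class OverfitModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 1)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        if batch_idx == 0:
            # Fingerprint the first batch of each epoch: if overfit_batches
            # really disabled shuffling, the same value repeats every epoch.
            print(f"epoch {self.current_epoch}: first-batch sum = {x.sum().item():.4f}")
        loss = torch.nn.functional.mse_loss(self(x), y)
        return {"loss": loss}

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

    def train_dataloader(self):
        dataset = TensorDataset(torch.randn(640, 32), torch.randn(640, 1))
        # shuffle=True triggers the UserWarning, but shuffling is not
        # actually turned off, so different samples are drawn each epoch.
        return DataLoader(dataset, batch_size=64, shuffle=True)


trainer = pl.Trainer(overfit_batches=10, max_epochs=20)
trainer.fit(OverfitModule())
```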
Expected behavior
Either the model should also converge with shuffle=True, since the warning claims shuffling is turned off (assuming the model converges with shuffle=False), or the warning should at least say that the user has to change shuffle to False themselves.

Environment

- CUDA:
  - GPU:
    - GeForce GTX 1080 Ti
  - available: True
  - version: 10.1
- Packages:
  - numpy: 1.19.0
  - pyTorch_debug: False
  - pyTorch_version: 1.7.0.dev20200705+cu101
  - pytorch-lightning: 0.8.5
  - tensorboard: 2.2.2
  - tqdm: 4.47.0
- System:
  - OS: Linux
  - architecture: 64bit
  - processor: x86_64
  - python: 3.7.7
  - version: #109-Ubuntu SMP Fri Jun 19 11:33:10 UTC 2020

Additional context
Hi! Thanks for your contribution, great first issue!
p-wein changed the title from "Trainer flag overfit_batches does not overwrite train dataloaders shuffle flag as stated in warning." to "Trainer flag overfit_batches does not overwrite train dataloaders shuffle flag" on Jul 14, 2020
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I am seeing the same issue when using --overfit_pct. From a comment in the code, I believe that option is due to be removed in 1.0.0, but is it worth fixing anyway? The same code would fix the issue, just checking self.overfit_pct instead.
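A hedged sketch of what such a check could look like: the function name is hypothetical and this is not the actual Lightning code, only the idea of forcing sequential sampling whenever either overfit flag is set:

```python
from torch.utils.data import DataLoader, SequentialSampler


def force_sequential_if_overfitting(trainer, dataloader: DataLoader) -> DataLoader:
    """Rebuild the loader with a SequentialSampler when overfitting is requested.

    `trainer` is assumed to expose `overfit_batches` and the deprecated
    `overfit_pct` (attribute names per the versions discussed above).
    """
    if getattr(trainer, "overfit_batches", 0) > 0 or getattr(trainer, "overfit_pct", 0) > 0:
        return DataLoader(
            dataloader.dataset,
            batch_size=dataloader.batch_size,
            # Replaces the RandomSampler that shuffle=True installs,
            # so every epoch iterates the dataset in the same order.
            sampler=SequentialSampler(dataloader.dataset),
            num_workers=dataloader.num_workers,
        )
    return dataloader
```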