
PyTorch Lightning 1.6.3: Standard patch release

@carmocca released this on 03 May 2022

[1.6.3] - 2022-05-03

Fixed

  • Used only a single instance of rich.console.Console throughout the codebase (#12886)
  • Fixed an issue to ensure all checkpoint states are saved to a common filepath with DeepSpeedStrategy (#12887)
  • Fixed trainer.logger deprecation message (#12671)
  • Fixed an issue where the sharded grad scaler was passed in when using BF16 with the ShardedStrategy (#12915)
  • Fixed an issue with recursive invocation of the DDP configuration in the HPU parallel plugin (#12912)
  • Fixed printing of ragged dictionaries in Trainer.validate and Trainer.test (#12857)
  • Fixed threading support for legacy loading of checkpoints (#12814)
  • Fixed pickling of KFoldLoop (#12441)
  • Stopped optimizer_zero_grad from being called after IPU execution (#12913)
  • Fixed fuse_modules to be QAT-aware for torch>=1.11 (#12891)
  • Enforced eval shuffle warning only for default samplers in DataLoader (#12653)
  • Enabled mixed precision in DDPFullyShardedStrategy when precision=16 (#12965) (see the first sketch after this list)
  • Fixed TQDMProgressBar reset and update to show correct time estimation (#12889)
  • Fixed fit loop restart logic to enable resuming from a checkpoint (#12821) (see the second sketch after this list)
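
For reference, a minimal sketch of how the combination addressed by #12965 is typically requested. It assumes PyTorch Lightning 1.6.x with fairscale installed and at least two CUDA devices; everything beyond the strategy and precision arguments is illustrative.

```python
import pytorch_lightning as pl
from pytorch_lightning.strategies import DDPFullyShardedStrategy

# Fully sharded training (fairscale-based) combined with 16-bit mixed precision;
# #12965 ensures the sharded grad scaler is used when precision=16 here.
trainer = pl.Trainer(
    accelerator="gpu",               # illustrative: assumes CUDA devices are available
    devices=2,
    strategy=DDPFullyShardedStrategy(),
    precision=16,
    max_epochs=1,
)
```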
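
Likewise, a minimal sketch of resuming a fit from a saved checkpoint, which exercises the fit loop restart logic fixed in #12821. The toy model, data, and checkpoint lookup below are illustrative and not part of this release.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class TinyModel(pl.LightningModule):
    """Toy module used only to illustrate resuming; not part of this release."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.cross_entropy(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


train_loader = DataLoader(
    TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,))), batch_size=8
)

# First run: train for two epochs and let the default ModelCheckpoint save a checkpoint.
trainer = pl.Trainer(max_epochs=2)
trainer.fit(TinyModel(), train_loader)

# Second run: resume from the saved checkpoint via `ckpt_path`; the fit loop
# restart logic continues training up to the new `max_epochs`.
resumed = pl.Trainer(max_epochs=4)
resumed.fit(TinyModel(), train_loader, ckpt_path=trainer.checkpoint_callback.best_model_path)
```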

Contributors

@akihironitta @carmocca @hmellor @jerome-habana @kaushikb11 @krshrimali @mauvilsa @niberger @ORippler @otaj @rohitgr7 @SeanNaren