OneCycleLR scheduler does not work with freeze-unfreeze finetuning strategy #1321
🐛 Bug
I wanted to create an image classifier by fine-tuning a pre-trained model on my dataset. When the OneCycleLR scheduler is used alongside the freeze-unfreeze finetuning strategy, training throws an exception once the unfreeze epoch is reached.
To Reproduce / Code Sample
I use flash's built-in `ImageClassifier` as follows:
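(A minimal sketch of the setup rather than my exact script: the data paths, backbone, and hyperparameters are placeholders, and I'm assuming the `"onecyclelr"` registry name and the three-element `lr_scheduler` tuple form from Flash's Task API.)

```python
import flash
from flash.image import ImageClassificationData, ImageClassifier

# Placeholder paths and hyperparameters, not my exact values.
datamodule = ImageClassificationData.from_folders(
    train_folder="data/train",
    val_folder="data/val",
    batch_size=32,
)

model = ImageClassifier(
    backbone="resnet18",
    num_classes=datamodule.num_classes,
    # (scheduler name, scheduler kwargs, PL scheduler config)
    lr_scheduler=(
        "onecyclelr",
        {"max_lr": 1e-3, "epochs": 10, "steps_per_epoch": 100},
        {"interval": "step"},
    ),
)

trainer = flash.Trainer(max_epochs=10)

# Backbone stays frozen for the first 5 epochs, then gets unfrozen.
trainer.finetune(model, datamodule=datamodule, strategy=("freeze_unfreeze", 5))
```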
Expected behaviour
After the specified number of epochs, the layers are unfrozen and training continues.
Actual behaviour
An exception is thrown:
It seems like the unfreezing strategy creates additional optimizer parameter groups, but when the unfreezing happens, some of the LR scheduler parameters are not copied / passed to the new param group properly in `pytorch_lightning.callbacks.finetuning.BaseFinetuning.unfreeze_and_add_param_group`.
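The failure seems to reproduce in plain PyTorch, without Flash or Lightning in the loop; a minimal sketch, assuming the problem reduces to OneCycleLR not handling param groups that are added after it was constructed (layer names are arbitrary):

```python
import torch

backbone = torch.nn.Linear(4, 4)
head = torch.nn.Linear(4, 2)

# OneCycleLR writes 'initial_lr' / 'max_lr' / 'min_lr' into every
# param group that exists at construction time.
optimizer = torch.optim.SGD(head.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=0.1, total_steps=10
)

# Roughly what unfreeze_and_add_param_group does internally: the backbone
# parameters join the optimizer as a brand-new param group.
optimizer.add_param_group({"params": backbone.parameters(), "lr": 0.01})

optimizer.step()
scheduler.step()  # KeyError: the new group has no 'initial_lr' key
```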
Environment
Additional context
https://pytorch-lightning.slack.com/archives/CRBLFHY79/p1651218144224359