🐛 Bug
Currently, in Lightning Flash, optimizer and scheduler creation is automated as follows.
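Roughly, the task receives an optimizer class and a learning rate and builds the optimizer by itself. A simplified sketch of that pattern (illustrative only, not the actual Flash source):

import torch
from pytorch_lightning import LightningModule

class Task(LightningModule):
    # Simplified sketch: the optimizer is created automatically from constructor
    # arguments, and the scheduler dict ("interval", "monitor", ...) shown below
    # is never exposed to the user.
    def __init__(self, optimizer=torch.optim.Adam, learning_rate=1e-3):
        super().__init__()
        self.optimizer_cls = optimizer
        self.learning_rate = learning_rate

    def configure_optimizers(self):
        return self.optimizer_cls(
            filter(lambda p: p.requires_grad, self.parameters()),
            lr=self.learning_rate,
        )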
However, in PyTorch Lightning, one has to do the following to have a scheduler step per training step or to monitor a metric for it.
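For example, by returning the full scheduler dict from configure_optimizers (a minimal sketch; the optimizer and scheduler choices here are only placeholders):

from pytorch_lightning import LightningModule
from torch.optim import Adam
from torch.optim.lr_scheduler import OneCycleLR

class MyModule(LightningModule):
    def configure_optimizers(self):
        optimizer = Adam(self.parameters(), lr=1e-3)
        scheduler = OneCycleLR(optimizer, max_lr=1e-3, total_steps=10_000)
        return {
            "optimizer": optimizer,
            "lr_scheduler": {
                "scheduler": scheduler,
                "interval": "step",  # step the scheduler every batch instead of every epoch
                "frequency": 1,
                "monitor": "val_loss",  # only relevant for ReduceLROnPlateau-style schedulers
            },
        }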
Here is the default scheduler dict.
return {
    "scheduler": None,
    "name": None,  # no custom name
    "interval": "epoch",  # after epoch is over
    "frequency": 1,  # every epoch/batch
    "reduce_on_plateau": False,  # most often not ReduceLROnPlateau scheduler
    "monitor": None,  # value to monitor for ReduceLROnPlateau
    "strict": True,  # enforce that the monitor exists for ReduceLROnPlateau
    "opt_idx": None,  # necessary to store opt_idx when optimizer frequencies are specified
}
A possible solution API to support stepping the scheduler per step would be to let the user provide the entire default dict, or the keys they want to override, when instantiating the task.
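One hypothetical shape for such an API (the scheduler-related keyword arguments below are assumptions for illustration, not an existing Flash signature):

import torch
from torch.optim.lr_scheduler import OneCycleLR
from flash.image import ImageClassifier

# Hypothetical: pass the scheduler class plus the full (or partial) scheduler dict
# at construction time and let the task merge it into the default dict above.
model = ImageClassifier(
    num_classes=10,
    optimizer=torch.optim.Adam,
    learning_rate=1e-3,
    scheduler=OneCycleLR,
    scheduler_kwargs={"max_lr": 1e-3, "total_steps": 10_000},
    scheduler_config={"interval": "step", "frequency": 1},
)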
To Reproduce
Steps to reproduce the behavior:
Code sample
Expected behavior
Environment
How you installed PyTorch (conda, pip, source):
Additional context
@tchaton I think support for this would be good. I spoke with @karthikrangasai recently about a revamp of the whole experience around optimizers / schedulers. A couple of ideas we came up with:
allow for callables to be passed, so users can do something like functools.partial(MultiStepLR, milestones=[100, 150]) rather than providing a kwargs dictionary (see the sketch below)
pre-register some good standard scheduler configurations in the scheduler registry, and allow (/ document) extending this. E.g. you could pick a registered configuration by name, as also sketched below.
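A rough sketch of what these two ideas could look like from the user side (the lr_scheduler argument, the registry hook, and the "cosine_with_warmup" name are assumptions, not an existing API):

from functools import partial
from torch.optim.lr_scheduler import MultiStepLR
from flash.image import ImageClassifier

# Idea 1: pass a callable that builds the scheduler from the optimizer,
# instead of a scheduler class plus a kwargs dictionary.
model = ImageClassifier(
    num_classes=10,
    lr_scheduler=partial(MultiStepLR, milestones=[100, 150]),
)

# Idea 2: pick a pre-registered scheduler configuration by name ...
model = ImageClassifier(num_classes=10, lr_scheduler="cosine_with_warmup")

# ... and allow users to extend the registry with their own configurations.
@ImageClassifier.lr_schedulers  # hypothetical registry hook
def multi_step(optimizer):
    return MultiStepLR(optimizer, milestones=[100, 150])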