Fix Configuring Learning Rate Schedulers (#1177)
* Update docs so users know the desired manner of configuring learning rate schedulers.

* update list

* as note

Co-authored-by: Jirka Borovec <[email protected]>
authman and Borda authored Mar 19, 2020
1 parent 01b8991 commit 711892a
Showing 2 changed files with 30 additions and 17 deletions.
14 changes: 14 additions & 0 deletions docs/source/optimizers.rst
@@ -19,6 +19,20 @@ Every optimizer you use can be paired with any `LearningRateScheduler <https://p

     def configure_optimizers(self):
         return [Adam(...), SGD(...)], [ReduceLROnPlateau(), LambdaLR()]

+    # Same as above with additional params passed to the first scheduler
+    def configure_optimizers(self):
+        optimizers = [Adam(...), SGD(...)]
+        schedulers = [
+            {
+                'scheduler': ReduceLROnPlateau(mode='max', patience=7),
+                'monitor': 'val_recall',  # Default: val_loss
+                'interval': 'epoch',
+                'frequency': 1
+            },
+            LambdaLR()
+        ]
+        return optimizers, schedulers
Use multiple optimizers (like GANs)
-------------------------------------
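(Editorial aside, not part of the commit: the snippet below is a minimal, self-contained sketch of how the dictionary form documented above might look inside a full LightningModule. The module name, layer sizes, learning rates, and the ``val_recall`` metric are illustrative assumptions, not anything prescribed by the diff.)

.. code-block:: python

    import torch
    from torch.optim import Adam, SGD
    from torch.optim.lr_scheduler import LambdaLR, ReduceLROnPlateau

    import pytorch_lightning as pl


    class TwoOptimizerModule(pl.LightningModule):
        """Illustrative module showing the dict-based scheduler configuration."""

        def __init__(self):
            super().__init__()
            self.encoder = torch.nn.Linear(32, 16)
            self.head = torch.nn.Linear(16, 2)

        def configure_optimizers(self):
            opt_a = Adam(self.encoder.parameters(), lr=1e-3)
            opt_b = SGD(self.head.parameters(), lr=1e-2)
            schedulers = [
                {
                    # Dict form: tell Lightning what to monitor and how often to step.
                    'scheduler': ReduceLROnPlateau(opt_a, mode='max', patience=7),
                    'monitor': 'val_recall',  # assumes this metric is logged during validation
                    'interval': 'epoch',
                    'frequency': 1,
                },
                # Plain scheduler objects can still be mixed in alongside dicts.
                LambdaLR(opt_b, lr_lambda=lambda epoch: 0.95 ** epoch),
            ]
            return [opt_a, opt_b], schedulers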
33 changes: 16 additions & 17 deletions pytorch_lightning/core/lightning.py
@@ -942,28 +942,27 @@ def configure_optimizers(self):
             dis_sched = CosineAnnealing(discriminator_opt, T_max=10) # called every epoch
             return [gen_opt, dis_opt], [gen_sched, dis_sched]

-        Some things to know
+        .. note:: Some things to note:
             - Lightning calls ``.backward()`` and ``.step()`` on each optimizer
-        and learning rate scheduler as needed.
+              and learning rate scheduler as needed.
             - If you use 16-bit precision (``precision=16``), Lightning will automatically
-        handle the optimizers for you.
-        - If you use multiple optimizers, training_step will have an additional
-        ``optimizer_idx`` parameter.
-        - If you use LBFGS lightning handles the closure function automatically for you
+              handle the optimizers for you.
+            - If you use multiple optimizers, training_step will have an additional ``optimizer_idx`` parameter.
+            - If you use LBFGS lightning handles the closure function automatically for you.
             - If you use multiple optimizers, gradients will be calculated only
-        for the parameters of current optimizer at each training step.
+              for the parameters of current optimizer at each training step.
             - If you need to control how often those optimizers step or override the
-        default .step() schedule, override the `optimizer_step` hook.
+              default .step() schedule, override the `optimizer_step` hook.
             - If you only want to call a learning rate scheduler every `x` step or epoch,
-        you can input this as 'frequency' key: dict(scheduler=lr_scheduler,
-        interval='step' or 'epoch', frequency=x)
+              or want to monitor a custom metric, you can specify these in a dictionary:

+              .. code-block:: python

+                  {
+                      'scheduler': lr_scheduler,
+                      'interval': 'step',  # or 'epoch'
+                      'monitor': 'val_f1',
+                      'frequency': x
+                  }
        """
        return Adam(self.parameters(), lr=1e-3)
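(Editorial aside, not part of the commit: a sketch of how the notes above fit together in practice, namely the extra ``optimizer_idx`` argument in ``training_step`` and the ``interval``/``frequency`` keys for stepping a scheduler every n training steps. The layer names, the dummy losses, and the frequency value of 100 are made-up placeholders, and the example assumes the automatic-optimization API described in this docstring.)

.. code-block:: python

    import torch
    from torch.optim import Adam, SGD
    from torch.optim.lr_scheduler import StepLR

    import pytorch_lightning as pl


    class TwoHeadedModule(pl.LightningModule):
        """Illustrative two-optimizer module mirroring the notes above."""

        def __init__(self):
            super().__init__()
            self.generator = torch.nn.Linear(8, 8)
            self.discriminator = torch.nn.Linear(8, 1)

        def training_step(self, batch, batch_idx, optimizer_idx):
            # With multiple optimizers, Lightning passes an extra ``optimizer_idx``
            # so the step can branch; gradients are computed only for the
            # parameters owned by the optimizer currently stepping.
            # (Assumes ``batch`` is a float tensor of shape (N, 8).)
            if optimizer_idx == 0:
                return self.generator(batch).pow(2).mean()
            return self.discriminator(batch).pow(2).mean()

        def configure_optimizers(self):
            gen_opt = Adam(self.generator.parameters(), lr=2e-4)
            dis_opt = SGD(self.discriminator.parameters(), lr=1e-3)
            # Dict form: step the generator's scheduler every 100 training steps
            # rather than once per epoch (100 is an arbitrary example value).
            gen_sched = {
                'scheduler': StepLR(gen_opt, step_size=1, gamma=0.99),
                'interval': 'step',
                'frequency': 100,
            }
            return [gen_opt, dis_opt], [gen_sched]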