Commit bf9a7a2
Leftovers
carmocca committed Feb 23, 2021
1 parent 295a70e commit bf9a7a2
Showing 2 changed files with 0 additions and 5 deletions.
2 changes: 0 additions & 2 deletions docs/source/common/optimizers.rst
@@ -300,8 +300,6 @@ override the :meth:`optimizer_step` function.

 For example, here step optimizer A every 2 batches and optimizer B every 4 batches
 
-.. note:: When using Trainer(enable_pl_optimizer=True), there is no need to call `.zero_grad()`.
-
 .. testcode::
 
     def optimizer_zero_grad(self, current_epoch, batch_idx, optimizer, opt_idx):
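The hunk above sits in the docs section on overriding ``optimizer_step`` to alternate optimizers ("step optimizer A every 2 batches and optimizer B every 4 batches"). A minimal sketch of that pattern, assuming the Lightning 1.2-era hook signature; the class name and surrounding hooks are illustrative and not part of this commit:

    import pytorch_lightning as pl


    class AlternatingOptimizersModule(pl.LightningModule):
        # Sketch of the alternating pattern described in the docs: step
        # optimizer A every 2 batches and optimizer B every 4 batches.
        # Signature follows the 1.2-era hook; it has changed in later releases.
        def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                           optimizer_closure, on_tpu, using_native_amp, using_lbfgs):
            if optimizer_idx == 0:  # optimizer A
                if batch_idx % 2 == 0:
                    optimizer.step(closure=optimizer_closure)
            elif optimizer_idx == 1:  # optimizer B
                if batch_idx % 4 == 0:
                    optimizer.step(closure=optimizer_closure)

        def optimizer_zero_grad(self, current_epoch, batch_idx, optimizer, opt_idx):
            # Matches the default behaviour shown in the docs hunk above.
            optimizer.zero_grad()

With two optimizers returned from ``configure_optimizers``, optimizer A steps on even batches and optimizer B every fourth batch; gradient zeroing stays on the default path.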
3 changes: 0 additions & 3 deletions pytorch_lightning/core/lightning.py
@@ -1324,9 +1324,6 @@ def optimizer_step(
 By default, Lightning calls ``step()`` and ``zero_grad()`` as shown in the example
 once per optimizer.
-.. tip:: With ``Trainer(enable_pl_optimizer=True)``, you can use ``optimizer.step()`` directly
-    and it will handle zero_grad, accumulated gradients, AMP, TPU and more automatically for you.
-
 Warning:
     If you are overriding this method, make sure that you pass the ``optimizer_closure`` parameter
     to ``optimizer.step()`` function as shown in the examples. This ensures that
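The retained warning in this docstring concerns forwarding ``optimizer_closure`` to ``optimizer.step()``. A minimal sketch of the default behaviour it refers to, under the same assumed 1.2-era signature (names illustrative):

    import pytorch_lightning as pl


    class ClosureAwareModule(pl.LightningModule):
        def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                           optimizer_closure, on_tpu, using_native_amp, using_lbfgs):
            # The closure wraps the training step and backward pass; passing it
            # to step() ensures that computation still runs, including for
            # optimizers such as LBFGS that re-evaluate the loss internally.
            optimizer.step(closure=optimizer_closure)

Dropping the closure means the training computation for that batch may not run as part of the step, which is why the docstring keeps this warning even after the ``enable_pl_optimizer`` tip is removed.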
