Commit

args
Borda committed Feb 23, 2021
1 parent 295a70e commit dc7e230
Showing 4 changed files with 0 additions and 7 deletions.
2 changes: 0 additions & 2 deletions docs/source/common/optimizers.rst
@@ -300,8 +300,6 @@ override the :meth:`optimizer_step` function.

For example, here step optimizer A every 2 batches and optimizer B every 4 batches

-.. note:: When using Trainer(enable_pl_optimizer=True), there is no need to call `.zero_grad()`.
-
.. testcode::

def optimizer_zero_grad(self, current_epoch, batch_idx, optimizer, opt_idx):
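For context, the docs hunk above describes stepping optimizer A every 2 batches and optimizer B every 4 batches by overriding ``optimizer_step``. A minimal sketch of that pattern, assuming the 1.2-era ``LightningModule.optimizer_step`` signature (illustrative only, not part of this commit):

    # step optimizer A every 2 batches and optimizer B every 4 batches,
    # forwarding the closure so Lightning still runs training_step + backward
    def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                       optimizer_closure, on_tpu, using_native_amp, using_lbfgs):
        if optimizer_idx == 0 and batch_idx % 2 == 0:  # optimizer A
            optimizer.step(closure=optimizer_closure)
        if optimizer_idx == 1 and batch_idx % 4 == 0:  # optimizer B
            optimizer.step(closure=optimizer_closure)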
3 changes: 0 additions & 3 deletions pytorch_lightning/core/lightning.py
@@ -1324,9 +1324,6 @@ def optimizer_step(
By default, Lightning calls ``step()`` and ``zero_grad()`` as shown in the example
once per optimizer.
-.. tip:: With ``Trainer(enable_pl_optimizer=True)``, you can use ``optimizer.step()`` directly
-and it will handle zero_grad, accumulated gradients, AMP, TPU and more automatically for you.
-
Warning:
If you are overriding this method, make sure that you pass the ``optimizer_closure`` parameter
to ``optimizer.step()`` function as shown in the examples. This ensures that
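The warning that survives this hunk requires any override to pass ``optimizer_closure`` on to ``optimizer.step()``. To illustrate why that matters, here is a sketch of a learning-rate warm-up override against the same 1.2-era signature; ``self.learning_rate`` is an assumed attribute, hypothetical for this sketch:

    def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                       optimizer_closure, on_tpu, using_native_amp, using_lbfgs):
        # warm up the learning rate over the first 500 global steps
        if self.trainer.global_step < 500:
            lr_scale = min(1.0, float(self.trainer.global_step + 1) / 500.0)
            for pg in optimizer.param_groups:
                pg["lr"] = lr_scale * self.learning_rate  # assumed attribute
        # forwarding the closure lets Lightning run training_step and
        # backward inside optimizer.step()
        optimizer.step(closure=optimizer_closure)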
1 change: 0 additions & 1 deletion tests/plugins/test_rpc_sequential_plugin.py
@@ -42,7 +42,6 @@ def test_rpc_sequential_plugin_manual(tmpdir, args=None):
gpus=2,
distributed_backend="ddp",
plugins=[RPCSequentialPlugin(balance=[2, 1], rpc_timeout_sec=5 * 60)],
-enable_pl_optimizer=True,
)

trainer.fit(model)
1 change: 0 additions & 1 deletion tests/utilities/test_all_gather_grad.py
@@ -89,7 +89,6 @@ def training_epoch_end(self, outputs) -> None:
max_epochs=1,
log_every_n_steps=1,
accumulate_grad_batches=2,
-enable_pl_optimizer=True,
gpus=2,
accelerator="ddp",
)
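After the deletion, the visible context lines suggest this test constructs its Trainer roughly as below; the opening ``Trainer(`` call and any arguments above the hunk are truncated in the diff, so this is a reconstruction rather than the verbatim test code:

    trainer = Trainer(
        max_epochs=1,
        log_every_n_steps=1,
        accumulate_grad_batches=2,
        gpus=2,
        accelerator="ddp",
    )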
