Description & Motivation
In previous Lightning versions, it was possible to use multiple optimizers with automatic optimization. Since lightning>=2.0.0 this is no longer possible (the `optimizer_idx` argument of `training_step` is gone). The problem is not that the optimization now has to be done manually, but that nothing can be returned from the `{training,validation,test}_step` methods. I rely heavily on the dict return type to keep the logging logic inside Lightning hooks such as `on_train_batch_end`, which receives the outputs of `training_step` as input.
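For concreteness, here is a minimal sketch of the kind of shared logging I mean (the class name `AbstractLightningModule` and the metric keys are illustrative):

```python
import lightning as L


class AbstractLightningModule(L.LightningModule):
    """Base class that centralizes logging for all of my models."""

    def on_train_batch_end(self, outputs, batch, batch_idx):
        # `outputs` is whatever training_step returned; I return a dict
        # of scalar tensors and log each entry here.
        for name, value in outputs.items():
            self.log(f"train/{name}", value)

    def on_validation_batch_end(self, outputs, batch, batch_idx, dataloader_idx=0):
        for name, value in outputs.items():
            self.log(f"val/{name}", value)
```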
Pitch
I would like to be able to return a dict without the key 'loss' from {training,validation,test}_step when using manual optimization.
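A sketch of what I would like to write with manual optimization (building on the `AbstractLightningModule` above; the networks and losses are placeholders, not a real GAN objective):

```python
import torch
from torch import nn


class GANLightningModule(AbstractLightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False  # required for two optimizers in 2.x
        self.generator = nn.Linear(16, 32)
        self.discriminator = nn.Linear(32, 1)

    def configure_optimizers(self):
        opt_g = torch.optim.Adam(self.generator.parameters(), lr=2e-4)
        opt_d = torch.optim.Adam(self.discriminator.parameters(), lr=2e-4)
        return opt_g, opt_d

    def training_step(self, batch, batch_idx):
        opt_g, opt_d = self.optimizers()

        # Generator update (placeholder loss).
        fake = self.generator(batch)
        g_loss = -self.discriminator(fake).mean()
        opt_g.zero_grad()
        self.manual_backward(g_loss)
        opt_g.step()

        # Discriminator update (placeholder loss).
        d_loss = self.discriminator(fake.detach()).mean()
        opt_d.zero_grad()
        self.manual_backward(d_loss)
        opt_d.step()

        # What I am asking for: returning a dict *without* a 'loss' key,
        # passed through to on_train_batch_end as `outputs`.
        return {"g_loss": g_loss.detach(), "d_loss": d_loss.detach()}
```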
Alternatives
The only alternative I can find is to move the logging code currently written in the `on_train_batch_end`, `on_validation_batch_end`, and `on_test_batch_end` hooks of an `AbstractLightningModule` into the `GANLightningModule`. This would duplicate code, because other `LightningModule`s also inherit from `AbstractLightningModule` and use the same logging hooks.
Additional context
PS: Is there a typo in this line of the documentation? Shouldn't it be "Skip to the next batch. This is only supported for ~~automatic~~ manual optimization."?
cc @Borda