Update hooks pseudocode #7713
Conversation
Codecov Report

```diff
@@           Coverage Diff           @@
##           master   #7713    +/-  ##
=======================================
- Coverage      93%     92%     -0%
=======================================
  Files         199     199
  Lines       12971   12960    -11
=======================================
- Hits        12001   11952    -49
- Misses        970    1008    +38
```
```diff
@@ -1064,7 +1064,9 @@ override :meth:`pytorch_lightning.core.LightningModule.tbptt_split_batch`:

 Hooks
 ^^^^^
-This is the pseudocode to describe how all the hooks are called during a call to ``.fit()``.
+This is the pseudocode to describe the structure of :meth:`~pytorch_lightning.trainer.Trainer.fit`.
+The inputs and outputs of each function are not represented for simplicity. Please check each function's API reference
```
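For context, the call order that the docs snippet describes can be observed directly. Below is a minimal sketch (not the snippet from the diff; it assumes `torch` and `pytorch_lightning` are installed, and hook signatures vary slightly between releases) that prints each overridden hook's name while `Trainer.fit()` runs:

```python
# Minimal sketch: print hook names as Trainer.fit() invokes them, making
# the hook call order visible. Only hooks with stable no-argument
# signatures are overridden to stay compatible across releases.
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class HookOrderModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)

    def on_fit_start(self):
        print("on_fit_start")

    def on_train_start(self):
        print("on_train_start")

    def on_train_epoch_start(self):
        print("on_train_epoch_start")

    def training_step(self, batch, batch_idx):
        print("training_step")
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def on_train_end(self):
        print("on_train_end")

    def on_fit_end(self):
        print("on_fit_end")

    def configure_optimizers(self):
        print("configure_optimizers")
        return torch.optim.SGD(self.parameters(), lr=0.1)


if __name__ == "__main__":
    data = TensorDataset(torch.randn(8, 4), torch.randn(8, 1))
    trainer = pl.Trainer(max_epochs=1)
    trainer.fit(HookOrderModel(), DataLoader(data, batch_size=4))
```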
I prefer keeping `outputs` here, since it's very helpful for distinguishing some hooks (`on_train_epoch_end` vs. `training_epoch_end`) and since inputs/outputs across hooks are closely related to the order of execution.
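To illustrate the distinction raised here, a minimal sketch assuming a release where both hooks exist (roughly the 1.x series; the API reference has the exact signatures): `training_epoch_end` receives the collected `training_step` outputs, while `on_train_epoch_end` is a plain notification hook.

```python
import torch
import pytorch_lightning as pl


class EpochEndModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), y)
        return {"loss": loss}

    def training_epoch_end(self, outputs):
        # `outputs` collects the return value of every training_step
        # call in the epoch, so per-step results are available here.
        avg = torch.stack([o["loss"] for o in outputs]).mean()
        self.log("avg_train_loss", avg)

    def on_train_epoch_end(self):
        # Notification-style hook: no outputs are passed in, so any
        # per-step state must be accumulated manually on `self`.
        pass
```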
I removed it because I find it weird to have the argument for `training_step` but not for all the other functions that take a `batch` or an `outputs` argument. So I would either add the `batch` argument for all functions or for none of them, and adding it for all decreases the simplicity.
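To make that point concrete, several batch-level hooks take a `batch` and/or `outputs` argument, not just `training_step`. A signature sketch (approximate and release-dependent; `dataloader_idx` appears in these signatures in some versions):

```python
import pytorch_lightning as pl


class BatchArgsModel(pl.LightningModule):
    # Signature sketch only; bodies intentionally left empty.
    def on_train_batch_start(self, batch, batch_idx):
        ...  # sees the batch before training_step runs

    def training_step(self, batch, batch_idx):
        ...  # consumes the batch, produces an output (e.g. the loss)

    def on_train_batch_end(self, outputs, batch, batch_idx):
        ...  # sees both the training_step outputs and the batch
```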
That's fair. Should we have a separate section discussing how data is passed across hooks (not for this PR)? It could help clarify #7723.
Sure. If you have any ideas on how to clear it up, do not hesitate to send a patch or open an issue.
What does this PR do?
See title.
I removed the inputs/outputs because IMO they decrease the readability if we want to do them correctly, and because the value of this snippet is the hook order, not the inputs/arguments; we have the API reference for those.
Also included: