Set progressbar refresh rate in Google Colab #5516
Conversation
so refresh_rate is 1 now just because of the Colab thing? The warning is raised during Trainer init itself, so isn't it better to set it to 1 in the other cases too?
no, the other way around. It is always 1 (as it was before), except in Colab, or when the user explicitly sets the value they want.
ok, my bad... got it confused with
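The resolution logic described above can be sketched roughly like this. This is a hypothetical helper, not the exact Trainer code; the function name and the Colab default of 20 are assumptions based on this thread:

```python
import os

def resolve_refresh_rate(refresh_rate=None):
    """Sketch of the default resolution discussed above (hypothetical)."""
    if refresh_rate is None:
        # Colab sets the COLAB_GPU environment variable, so its presence
        # is used as a signal to pick a slower default refresh rate.
        return 20 if os.getenv('COLAB_GPU') else 1
    # An explicit user value always wins, including 0 (disables the bar).
    return refresh_rate
```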
Codecov Report
@@ Coverage Diff @@
## release/1.2-dev #5516 +/- ##
================================================
+ Coverage 89% 93% +4%
================================================
Files 153 153
Lines 10813 11088 +275
================================================
+ Hits 9615 10261 +646
+ Misses 1198 827 -371
Co-authored-by: Nicki Skafte <[email protected]>
pytorch_lightning/trainer/trainer.py
Outdated
@@ -219,7 +219,7 @@ def __init__(
     process_position: orders the progress bar when running multiple models on same machine.

     progress_bar_refresh_rate: How often to refresh progress bar (in steps). Value ``0`` disables progress bar.
-        Ignored when a custom callback is passed to :paramref:`~Trainer.callbacks`.
+        Ignored when a custom progress bar is passed to :paramref:`~Trainer.callbacks`. Default: 1
this is confusing, as you say the default is 1 but in the code it is None, so maybe mention that it is set internally?
what should I do?
we have other Trainer args that accept None and are internally set to their defaults
yes, but usually this is only done for one of the following reasons:
1. We need to check for some type before setting the default (for example, to raise deprecation warnings).
2. They would be mutable otherwise.
3. They depend on other parameters.
4. They are really hard to parse/set correctly.
In general, 2) is the most common.
Ok, here we are in case 3, because the default depends on an environment variable.
Then I will just update the docstring, or are there other changes you'd like?
updated docstring, do you like it better?
        UserWarning
    )
def configure_progress_bar(self, refresh_rate=None, process_position=0):
    if os.getenv('COLAB_GPU') and refresh_rate is None:
is this env var always set on Colab? Also on TPU?
it's '0' otherwise
@rohitgr7 not according to my quick testing.
https://colab.research.google.com/drive/1_W9Dk-cxbBZmg6R2tNrqu3gzpjcEPjdb?usp=sharing
I get True back on CPU, GPU, and TPU even after restarting the runtime.
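Based on the quick test above, a presence check is safer than a truthiness check on the value, since the value can be the non-empty string '0'. A minimal sketch (`in_colab` is a hypothetical helper name, not part of the library):

```python
import os

def in_colab() -> bool:
    # The value can be '0' on some runtimes, which is still a non-empty
    # (truthy) string, so checking for the key's presence is more
    # explicit than relying on os.getenv truthiness.
    return 'COLAB_GPU' in os.environ
```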
Maybe it's changed now. Last time I checked, it was '1' for GPU and '0' for the others.
LGTM!
What does this PR do?
Fixes #5515
Colab example based on this branch:
https://colab.research.google.com/drive/1CX9WOkZPB5HtqOQD0_rHPeaOa9Cca-1g?usp=sharing
Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
Before you start reviewing, make sure you have read the review guidelines. In short, see the following bullet list: