
Fix progress bar logging of metrics if compute_on_step=False #5895

Merged (1 commit) on Feb 11, 2021

Conversation

teddykoker
Contributor

What does this PR do?

Quick fix for this issue reported on Slack:

Hi, when using the Metrics API, I'm trying to set compute_on_step=False when instantiating my ROC and AveragePrecision metrics, because I want to compute these over the entire epoch (as these metrics aren't very useful batch-wise).
So in the training/val/test steps, I do something like self.roc(preds, truth), but I don't log this in the step. Instead, at the epoch end, I compute auc() using the output of self.roc.compute(), which I then log, etc.
However, if I set compute_on_step=False, then PL throws the following error at the start of Epoch 0 (the validation sanity check runs fine):
result[dl_key] = self[k]._forward_cache.detach()
AttributeError: 'NoneType' object has no attribute 'detach'
Looking at PL's source code, it appears that if compute_on_step=False, the _forward_cache property is set to None and stays None; it is only updated to a non-None value if compute_on_step=True. Is this the intended behavior, or is this a bug? If it is intended, how could the compute_on_step flag ever be set to False?
Thanks for the help!

cc @tchaton: I'm not sure how I feel about using the _forward_cache attribute
of the metric; I never really intended for it to be used outside the metric
class, so there could be unintended consequences down the road. We might want
to consider a more robust solution here.
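To make the failure mode concrete, here is a minimal, dependency-free sketch of the pattern described above. It does not reproduce PyTorch Lightning's actual classes; the `compute_on_step` flag and `_forward_cache` attribute mirror the names discussed in this PR, while everything else (the `Metric` class body and `log_step_metric` helper) is hypothetical, illustrating why a None-guard is needed wherever `_forward_cache` is read:

```python
# Hypothetical sketch, not PyTorch Lightning source code.
class Metric:
    def __init__(self, compute_on_step=True):
        self.compute_on_step = compute_on_step
        self._forward_cache = None  # stays None unless compute_on_step=True
        self._values = []

    def update(self, value):
        self._values.append(value)

    def __call__(self, value):
        self.update(value)
        if self.compute_on_step:
            # Only populated when per-step computation is requested.
            self._forward_cache = self.compute()
            return self._forward_cache
        return None

    def compute(self):
        # Epoch-level result over all accumulated values.
        return sum(self._values) / len(self._values)


def log_step_metric(metric):
    # Before the fix, logging code read metric._forward_cache
    # unconditionally, raising AttributeError on None when
    # compute_on_step=False. Guarding for None avoids the crash.
    if metric._forward_cache is not None:
        return metric._forward_cache
    return None


m = Metric(compute_on_step=False)
m(1.0)
m(3.0)
assert log_step_metric(m) is None  # no per-step value, but no crash
assert m.compute() == 2.0          # epoch-end computation still works
```

With the guard in place, the user's pattern (accumulate in steps, call `compute()` at epoch end) works for both settings of the flag.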


codecov bot commented Feb 10, 2021

Codecov Report

Merging #5895 (28e8350) into release/1.2-dev (a96137c) will decrease coverage by 0%.
The diff coverage is 100%.

@@               Coverage Diff                @@
##           release/1.2-dev   #5895    +/-   ##
================================================
- Coverage               88%     88%    -0%     
================================================
  Files                  181     181            
  Lines                12929   12760   -169     
================================================
- Hits                 11426   11243   -183     
- Misses                1503    1517    +14     

@tchaton
Contributor

tchaton commented Feb 10, 2021


I agree. This solution is prone to errors in the future.

@Borda Borda added the bug Something isn't working label Feb 10, 2021
@Borda Borda added this to the 1.2 milestone Feb 10, 2021
@Borda Borda added the ready PRs ready to be merged label Feb 10, 2021
@Borda Borda enabled auto-merge (squash) February 10, 2021 14:41
Base automatically changed from release/1.2-dev to master February 11, 2021 14:32
@Borda Borda merged commit 44958ad into master Feb 11, 2021
@Borda Borda deleted the log_foward_cache_fix branch February 11, 2021 14:32