
Accuracy no longer works with logits #391

Closed
shenberg opened this issue Jul 20, 2021 · 4 comments
Labels: bug / fix (Something isn't working), help wanted (Extra attention is needed)
Milestone: v0.5

Comments

@shenberg

🐛 Bug

Thresholds are now limited to the open interval (0, 1), so I cannot set a threshold that makes sense for logits (e.g. 0).

To Reproduce

Steps to reproduce the behavior:

Accuracy(threshold=0.)

Code sample

import torchmetrics

# Fails: the threshold is required to lie strictly between 0 and 1
torchmetrics.Accuracy(threshold=0.0)

Expected behavior

A threshold of 0 should be valid for binary and multilabel classification.

Additional context

It makes sense for Accuracy to accept logits: the training step often fuses the logit-to-probability and probability-to-loss calculations into one layer for numerical-stability reasons (e.g. nn.BCEWithLogitsLoss), and forcing a recalculation of the sigmoid just for the metric is unnecessary, since the sigmoid is monotonic (see the sketch below).
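A minimal sketch of that equivalence (illustrative only, assuming plain PyTorch tensors): because the sigmoid is monotonic and sigmoid(0) == 0.5, thresholding logits at 0 produces exactly the same binary predictions as thresholding probabilities at 0.5.

import torch

# sigmoid is monotonic and sigmoid(0) == 0.5, so comparing logits against 0
# is equivalent to comparing the corresponding probabilities against 0.5.
logits = torch.tensor([-2.0, -0.1, 0.3, 1.5])
probs = torch.sigmoid(logits)

preds_from_logits = logits > 0.0
preds_from_probs = probs > 0.5
assert torch.equal(preds_from_logits, preds_from_probs)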

The documentation for Accuracy states:

threshold (float) – Threshold for transforming probability or logit predictions to binary (0,1) predictions, in the case of binary or multi-label inputs. Default value of 0.5 corresponds to input being probabilities.

shenberg added the bug / fix (Something isn't working) and help wanted (Extra attention is needed) labels on Jul 20, 2021
@github-actions

Hi! Thanks for your contribution, great first issue!

@Borda
Member

Borda commented Jul 20, 2021

@shenberg what version are you using? This should be fixed on master now by #351.
Feel free to reopen if you still have this issue... 🐰

Borda closed this as completed on Jul 20, 2021
@shenberg
Author

shenberg commented Jul 20, 2021

@Borda, unfortunately Accuracy has its own check (see accuracy.py, lines 208-209 on the current commit): the PR you linked fixed the general case in stats_scores.py, but it left this specific class non-functional because it performs its own bespoke validation of the threshold.
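For context, the check being described is roughly of the following shape. This is a paraphrased sketch based on the (0, 1) restriction mentioned above, not the actual torchmetrics source, and _check_threshold is a hypothetical name used only for illustration.

# Hypothetical, paraphrased sketch of the kind of validation described above;
# a threshold of 0.0 falls outside the open interval (0, 1) and is rejected
# before any logits can be thresholded.
def _check_threshold(threshold: float) -> None:
    if not 0 < threshold < 1:
        raise ValueError(f"The `threshold` should be a float in the (0, 1) interval, got {threshold}")

_check_threshold(0.5)  # passes
_check_threshold(0.0)  # raises ValueError, which is this issue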

@cnut1648

Agreed. I can confirm that in the latest version, 0.4.1, the threshold can't be set to 0, which makes Accuracy unable to handle the raw logit output of a binary classification model (where no sigmoid is applied).
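Until the check is relaxed, one possible workaround (a sketch, assuming the model returns raw logits) is to apply the sigmoid manually before updating the metric and keep the default threshold of 0.5, which is equivalent to thresholding the logits at 0:

import torch
import torchmetrics

# Convert logits to probabilities before passing them to the metric, so the
# default probability threshold of 0.5 applies.
metric = torchmetrics.Accuracy(threshold=0.5)

logits = torch.tensor([-2.0, -0.1, 0.3, 1.5])  # raw model outputs, no sigmoid
target = torch.tensor([0, 0, 1, 1])

metric.update(torch.sigmoid(logits), target)
print(metric.compute())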

Borda added this to the v0.5 milestone on Aug 18, 2021
3 participants