
Add SpecificityAtSensitivity Metric #1432

Merged
merged 47 commits into Lightning-AI:master from specificity-at-sensitivity on Jan 27, 2023

Conversation

shenoynikhil
Contributor

@shenoynikhil shenoynikhil commented Jan 7, 2023

What does this PR do?

Partially fixes #971. Introduces the SpecificityAtSensitivity metric. The computation works as follows:

  1. Compute the false positive rate and true positive rate at different thresholds (using the ROC curve).
  2. The true positive rate is the sensitivity, while 1 - false positive rate is the specificity.
  3. Consider all thresholds where sensitivity >= min_sensitivity (provided by the user).
  4. Return the maximum specificity over those thresholds.
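The four steps above can be sketched in pure Python with a hand-rolled ROC sweep, without torchmetrics internals. The function name and the `(0.0, 1e6)` fallback for the case where no threshold meets the constraint are illustrative assumptions, not the library's API:

```python
def specificity_at_sensitivity(preds, target, min_sensitivity):
    """Return (max specificity, threshold) over thresholds where
    sensitivity >= min_sensitivity. Pure-Python illustration only."""
    pos = sum(target)
    neg = len(target) - pos
    # Fallback if no threshold satisfies the constraint (hypothetical
    # convention, loosely mirroring the edge-case handling discussed below).
    best = (0.0, 1e6)
    # Step 1: sweep every distinct prediction score as a candidate threshold.
    for thresh in sorted(set(preds)):
        tp = sum(1 for p, t in zip(preds, target) if p >= thresh and t == 1)
        fp = sum(1 for p, t in zip(preds, target) if p >= thresh and t == 0)
        # Step 2: TPR is sensitivity; 1 - FPR is specificity.
        sensitivity = tp / pos if pos else 0.0
        specificity = 1 - fp / neg if neg else 0.0
        # Steps 3-4: keep the best specificity among admissible thresholds.
        if sensitivity >= min_sensitivity and specificity > best[0]:
            best = (specificity, thresh)
    return best


spec, thresh = specificity_at_sensitivity([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1], 0.5)
print(spec, thresh)  # 1.0 0.8
```

Raising `min_sensitivity` to 1.0 on the same toy data forces a lower threshold (0.35) and drops the achievable specificity to 0.5, which is the trade-off the metric quantifies.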

Pseudocode

from torchmetrics.functional.classification.precision_recall_curve import (
    _binary_precision_recall_curve_format,
    _binary_precision_recall_curve_update,
)
from torchmetrics.functional.classification.roc import _binary_roc_compute

# Validate and format inputs, then accumulate the confusion-matrix state.
preds, target, thresholds = _binary_precision_recall_curve_format(preds, target, thresholds, ignore_index)
state = _binary_precision_recall_curve_update(preds, target, thresholds)

# The ROC gives FPR and TPR (= sensitivity) per threshold; specificity = 1 - FPR.
fpr, sensitivity, thresholds = _binary_roc_compute(state, thresholds, pos_label)
specificity = 1 - fpr

# Actual computation: max specificity among thresholds meeting the
# minimum-sensitivity constraint.
max_spec, _, best_threshold = max(
    (sp, sn, thresh) for sp, sn, thresh in zip(specificity, sensitivity, thresholds) if sn >= min_sensitivity
)

return max_spec, best_threshold
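The selection step in the pseudocode leans on Python's lexicographic tuple comparison: `max` over `(specificity, sensitivity, threshold)` tuples picks the highest specificity, breaking ties by sensitivity and then by threshold. A toy check with made-up candidate values:

```python
# Candidates are (specificity, sensitivity, threshold); values are invented
# purely to demonstrate the ordering.
candidates = [
    (0.70, 0.95, 0.3),
    (0.80, 0.90, 0.5),  # highest specificity -> selected
    (0.80, 0.85, 0.6),  # same specificity, lower sensitivity -> loses the tie
]
max_spec, _, best_threshold = max(candidates)
print(max_spec, best_threshold)  # 0.8 0.5
```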

Before submitting

  • Was this discussed/approved via a GitHub issue? (no need for typos and docs improvements)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure to update the docs?
  • Did you write any new necessary tests?

PR review

Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.

Did you have fun?

Make sure you had fun coding 🙃

@shenoynikhil shenoynikhil changed the title from "Added SpecificityAtSensitivity Metrics" to "Add SpecificityAtSensitivity Metrics" Jan 7, 2023
@shenoynikhil shenoynikhil marked this pull request as ready for review January 7, 2023 07:13
@shenoynikhil shenoynikhil marked this pull request as draft January 7, 2023 08:12
@shenoynikhil shenoynikhil marked this pull request as ready for review January 7, 2023 21:44
@shenoynikhil
Contributor Author

@Borda @stancld @SkafteNicki If you can provide some feedback on the edge cases (all-positive and all-negative targets), that would be great. I wrote the edge-case handling based on the recall_at_fixed_precision metric.

@codecov

codecov bot commented Jan 7, 2023

Codecov Report

Merging #1432 (d9e1fa3) into master (aab3a3b) will decrease coverage by 38%.
The diff coverage is 86%.

Additional details and impacted files
@@           Coverage Diff            @@
##           master   #1432     +/-   ##
========================================
- Coverage      90%     51%    -38%     
========================================
  Files         211     213      +2     
  Lines       10839   10986    +147     
========================================
- Hits         9717    5657   -4060     
- Misses       1122    5329   +4207     

@shenoynikhil shenoynikhil changed the title from "Add SpecificityAtSensitivity Metrics" to "Add SpecificityAtSensitivity Metric" Jan 23, 2023
@mergify mergify bot added the ready label Jan 26, 2023
@justusschock justusschock enabled auto-merge (squash) January 27, 2023 14:12
@justusschock justusschock merged commit 3f93c72 into Lightning-AI:master Jan 27, 2023
@shenoynikhil shenoynikhil deleted the specificity-at-sensitivity branch January 27, 2023 21:31
Development

Successfully merging this pull request may close these issues.

Add classification metrics: SensitivityAtSpecificity and SpecificityAtSensitivity