Absent class miou fix [WIP] #2892
base: master
Conversation
Case 1: Perfect prediction

```python
import torch
from torchmetrics.segmentation import MeanIoU

num_classes = 3
metric_one = MeanIoU(num_classes=num_classes, per_class=False, input_format='index')
metric_two = MeanIoU(num_classes=num_classes, per_class=True, input_format='index')
target = torch.tensor([
    [0, 1],  # Ground truth: class 0, class 1
    [1, 0],  # Ground truth: class 1, class 0
    [2, 2],  # Ground truth: class 2, class 2
])
preds = torch.tensor([
    [0, 1],  # Predictions: class 0, class 1
    [1, 0],  # Predictions: class 1, class 0
    [2, 2],  # Predictions: class 2, class 2
])
metric_one.update(preds, target)
miou_per_class_one = metric_one.compute()
metric_two.update(preds, target)
miou_per_class_two = metric_two.compute()
print(miou_per_class_one)
print(miou_per_class_two)
```

Returns:

```
tensor(1.)
tensor([1., 1., 1.])
```

Case 2: Perfect prediction, but one completely absent class

```python
import torch
from torchmetrics.segmentation import MeanIoU

num_classes = 4
metric_one = MeanIoU(num_classes=num_classes, per_class=False, input_format='index')
metric_two = MeanIoU(num_classes=num_classes, per_class=True, input_format='index')
target = torch.tensor([
    [0, 1],  # Ground truth: class 0, class 1
    [1, 0],  # Ground truth: class 1, class 0
    [2, 2],  # Ground truth: class 2, class 2
])
preds = torch.tensor([
    [0, 1],  # Predictions: class 0, class 1
    [1, 0],  # Predictions: class 1, class 0
    [2, 2],  # Predictions: class 2, class 2
])
metric_one.update(preds, target)
miou_per_class_one = metric_one.compute()
metric_two.update(preds, target)
miou_per_class_two = metric_two.compute()
print(miou_per_class_one)
print(miou_per_class_two)
```

Returns:

```
tensor(nan)
tensor([1., 1., 1., nan])
```

Case 3: Completely wrong predictions (probably the same as the old behavior)

```python
import torch
from torchmetrics.segmentation import MeanIoU

num_classes = 3
metric_one = MeanIoU(num_classes=num_classes, per_class=False, input_format='index')
metric_two = MeanIoU(num_classes=num_classes, per_class=True, input_format='index')
target = torch.tensor([
    [0, 1],  # Ground truth: class 0, class 1
    [1, 0],  # Ground truth: class 1, class 0
    [2, 2],  # Ground truth: class 2, class 2
])
preds = torch.tensor([
    [1, 2],  # Predictions: all wrong
    [2, 1],  # Predictions: all wrong
    [0, 1],  # Predictions: all wrong
])
metric_one.update(preds, target)
miou_per_class_one = metric_one.compute()
metric_two.update(preds, target)
miou_per_class_two = metric_two.compute()
print(miou_per_class_one)
print(miou_per_class_two)
```

Returns:

```
tensor(0.)
tensor([0., 0., 0.])
```
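To make the Case 2 aggregation question concrete: once the absent class is marked `nan` in the per-class scores, the overall mIoU can either propagate the `nan` or skip undefined classes (the behavior of `torch.nanmean`). A minimal pure-Python sketch of the difference; the values mirror Case 2's per-class output and this is not the `MeanIoU` implementation:

```python
import math

# Per-class IoU from Case 2: class 3 never occurs in preds or target,
# so its IoU is undefined (nan) rather than 0 or 1.
per_class_iou = [1.0, 1.0, 1.0, math.nan]

# Averaging over all classes propagates the nan (Case 2's scalar result).
naive_mean = sum(per_class_iou) / len(per_class_iou)

# Averaging only over present classes yields a defined score.
present = [v for v in per_class_iou if not math.isnan(v)]
nan_mean = sum(present) / len(present)

print(naive_mean)  # nan
print(nan_mean)    # 1.0
```

Which of the two the scalar (`per_class=False`) result should report is exactly the design decision raised in this PR.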
Codecov Report

Attention: Patch coverage is

Additional details and impacted files

```diff
@@            Coverage Diff            @@
##           master    #2892     +/-   ##
=========================================
- Coverage      69%      35%     -34%
=========================================
  Files         332      332
  Lines       18966    18969       +3
=========================================
- Hits        13055     6598    -6457
- Misses       5911    12371    +6460
```
While it's marked WIP, I still need comments from you, @Borda. I posted some cases and wonder how I should proceed.
What does this PR do?
Fixes #2866
This is not the final version. I wanted to discuss the expected outputs for different edge cases and how to deal with them. I haven't included tests for these edge cases because MONAI handles them differently than what is proposed in this PR/issue, so adding a test and checking it against the MONAI value would lead to failing tests. I'll post different inputs and the outputs we should expect in a follow-up comment.
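The edge case at stake: for a class with zero support in both `preds` and `target`, intersection and union are both 0, so IoU is 0/0 and any concrete value (0, 1, or `nan`) is a convention, not a computation. A hedged pure-Python sketch of the convention proposed here (mark such classes `nan`); the helper `iou_per_class` is hypothetical and not part of torchmetrics:

```python
import math

def iou_per_class(preds, target, num_classes):
    """Per-class IoU over flat index lists; nan marks classes that are
    absent from both preds and target (0/0 is undefined, not 0 or 1)."""
    scores = []
    for c in range(num_classes):
        inter = sum(p == c and t == c for p, t in zip(preds, target))
        union = sum(p == c or t == c for p, t in zip(preds, target))
        scores.append(inter / union if union > 0 else math.nan)
    return scores

# Case 2 flattened: class 3 never appears, so its entry is nan.
preds = [0, 1, 1, 0, 2, 2]
target = [0, 1, 1, 0, 2, 2]
print(iou_per_class(preds, target, num_classes=4))  # [1.0, 1.0, 1.0, nan]
```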
Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun? partly 😄
Make sure you had fun coding 🙃
📚 Documentation preview 📚: https://torchmetrics--2892.org.readthedocs.build/en/2892/