Fix segmentation.MeanIoU #2698
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@ Coverage Diff @@
##           master   #2698   +/- ##
=====================================
- Coverage      69%     69%    -0%
=====================================
  Files         329     316    -13
  Lines       18071   17909   -162
=====================================
- Hits        12497   12335   -162
  Misses       5574    5574
Great work! Can we please add a test to prevent this issue in the future?
I have also noticed an issue with the names and descriptions in the tests in torchmetrics/tests/unittests/segmentation/test_generalized_dice_score.py (lines 66 to 108 in 24e99c5). Should I create a new issue?
That was a typo, fixing it in #2709.
Added generalized testing for this.
I suggest adding a test case to verify the metric aggregation behavior: the aggregated metric of a batch with 2*N predictions should match the aggregated metrics from two separate updates, each with N predictions. Are there any metrics that are not expected to behave this way?
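A minimal sketch of the suggested consistency check, using a stand-in metric that averages per-batch scores instead of the real `torchmetrics.segmentation.MeanIoU` (the class and variable names here are illustrative, not part of the library):

```python
# Sketch of the suggested aggregation check: one update with 2*N predictions
# should match two separate updates of N predictions each. AvgMetric is an
# illustrative stand-in that averages per-batch scores, not the real MeanIoU.
class AvgMetric:
    def __init__(self):
        self.total = 0.0   # sum of per-batch mean scores
        self.n = 0         # number of update() calls

    def update(self, batch):
        self.total += sum(batch) / len(batch)  # per-batch mean score
        self.n += 1

    def compute(self):
        return self.total / self.n             # mean over batches

scores = [0.2, 0.4, 0.6, 0.8]   # per-sample scores: 2*N samples with N = 2

whole = AvgMetric()
whole.update(scores)            # a single update with 2*N samples

split = AvgMetric()
split.update(scores[:2])        # two updates with N samples each
split.update(scores[2:])

assert abs(whole.compute() - split.compute()) < 1e-9
```

Note that this equality holds for equal-sized batches; a metric whose per-batch score is not a per-sample average, or one updated with unequal batch sizes, may legitimately differ, which is presumably why the comment asks about exceptions.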
- use sum reduce function for score
- add state `num_batches` to keep number of processed batches
- add increment of `num_batches` in every `update` call
- in `compute` return sum of scores divided by number of processed batches
force-pushed from 87f2d8e to 86ca3f0
force-pushed from 86ca3f0 to 640c664
- use sum reduce function for score
- add state `num_batches` to keep number of processed batches
- add increment of `num_batches` in every `update` call
- in `compute` return sum of scores divided by number of processed batches

---------

Co-authored-by: Nicki Skafte <[email protected]>
(cherry picked from commit cb1ab37)
What does this PR do?
Fixes #2558
Fixes a few typos
This PR fixes the `segmentation.MeanIoU` metric.

Issues

When `MeanIoU` is updated via `self.update`, `self.score` is accumulated with each call, and when `self.compute` is then called, the accumulated (summed) metric is returned instead of the mean. When the metric is updated via `self.forward`, it works correctly. This happens because `self.forward` calls the `self._reduce_states` method, which updates the score using the following formula:

`reduced = ((self._update_count - 1) * global_state + local_state).float() / self._update_count`

where `global_state` is the score accumulated over previous steps and `local_state` is the score on the current batch.
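To illustrate why the `self.forward` path produces a correct mean, here is a minimal sketch of that running-mean reduction, using plain floats instead of tensor states (`reduce_states` is an illustrative stand-in for `self._reduce_states`, not the actual torchmetrics code):

```python
def reduce_states(global_state, local_state, update_count):
    """Fold the current batch score into the running mean, mirroring the
    reduction formula quoted above (plain floats instead of tensors)."""
    return ((update_count - 1) * global_state + local_state) / update_count

# Folding in per-batch scores one at a time yields their mean:
state = 0.0
for count, score in enumerate([0.4, 0.6, 0.8], start=1):
    state = reduce_states(state, score, count)
print(state)  # 0.6, the mean of the three batch scores
```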
Solution

To solve this issue:
- use a sum reduce function for `self.score`
- add a state `num_batches` to keep the number of processed batches
- increment `num_batches` in every `self.update` call
- in `self.compute`, return the sum of scores divided by the number of processed batches

📚 Documentation preview 📚: https://torchmetrics--2698.org.readthedocs.build/en/2698/
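The steps above can be sketched in plain Python, with a hypothetical stand-in class instead of the actual tensor-state `MeanIoU` implementation (all names here are illustrative):

```python
class MeanScoreMetric:
    """Accumulate a running sum of per-batch scores plus a batch counter,
    then divide in compute() -- mirroring the sum-reduce + num_batches fix."""

    def __init__(self):
        self.score = 0.0      # sum-reduced state
        self.num_batches = 0  # incremented on every update() call

    def update(self, batch_score):
        self.score += batch_score
        self.num_batches += 1

    def compute(self):
        # sum of scores divided by number of processed batches
        return self.score / self.num_batches

metric = MeanScoreMetric()
for s in [0.4, 0.6, 0.8]:
    metric.update(s)
print(metric.compute())  # 0.6, the mean over batches
```

Because `update` only sums and counts, `compute` gives the same answer no matter how many times it is called or how the updates are interleaved, which removes the discrepancy between the `update`/`compute` and `forward` paths.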