Change 'dice_coefficient' to calculate Dice score rather than pixel accuracy #51
Conversation
Sweet, that looks nice. I will play around a little bit to get a feeling for it, but at first sight this looks good. Thanks a lot!
@ethanbb I looked at your PR and quickly ran into some issues. I was using the
Sorry, forgot about this - not sure what's causing the NaNs. We ended up using a different loss function entirely (computing cross entropy for the pixels in each ground-truth class separately, then averaging across classes), so I haven't tested this version extensively. I still think this is closer to a real Dice score than the original, but the "true" Dice score we used for evaluation treats predictions as binary rather than summing up the probability of each class for each pixel (that binary version can't be used as a loss function, though, since it doesn't have a smooth gradient).
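For reference, the class-averaged cross-entropy idea described above could look roughly like the following numpy sketch. This is an illustration under assumed shapes, not the code the authors actually used; the function name, `(N, H, W, C)` layout, and `eps` guard are all my own:

```python
import numpy as np

def class_balanced_cross_entropy(probs, labels, eps=1e-7):
    """Cross entropy averaged per ground-truth class, then across classes.

    probs:  (N, H, W, C) softmax outputs
    labels: (N, H, W) integer ground-truth class per pixel
    """
    n_classes = probs.shape[-1]
    # per-pixel negative log-likelihood of the true class
    true_probs = np.take_along_axis(probs, labels[..., None], axis=-1)[..., 0]
    nll = -np.log(true_probs + eps)
    per_class = []
    for c in range(n_classes):
        mask = labels == c
        if mask.any():  # skip classes absent from this batch
            per_class.append(nll[mask].mean())
    # averaging across classes weights rare classes equally with common ones
    return float(np.mean(per_class))
```

Averaging within each ground-truth class first means a rare class contributes as much to the loss as a dominant one, which is the stated motivation for moving away from plain per-pixel cross entropy.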
Hi @ethanbb, thanks for your pull request. I have a question: in your code, the following line
why not
@glhfgg1024 Hey, it's been a while. I'm not sure whether summing or averaging the individual intersections-over-union gives a more "canonical" Dice coefficient, but there should be no difference in training, since the two differ only by a constant factor.
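To make the "constant factor" point concrete, here is a tiny illustration with hypothetical per-class scores (numbers are made up, not from the repo):

```python
import numpy as np

per_class_scores = np.array([0.8, 0.6, 0.4])  # hypothetical per-class ratios
total = per_class_scores.sum()
mean = per_class_scores.mean()
# sum == n_classes * mean, so as loss terms they differ only by a
# constant scale, which gradient-based optimizers absorb into the
# learning rate.
assert np.isclose(total, len(per_class_scores) * mean)
```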
Hi @ethanbb, thanks a lot for your kind comment!
Be careful with this calculation... Intersection over union (IoU) is not the same as the Dice coefficient. Besides the factor of 2 in the Dice numerator (which the code accounts for), the denominators are also different: for IoU the denominator is simply |A u B|, whereas for the Dice coefficient it is |A| + |B|. The main difference is that the intersection is double-counted in the Dice denominator, since both the |A| and |B| terms include it, whereas |A u B| includes it only once. To fix this, we could change the denominator to either |A| + |B| or to (union + intersect), as those are equivalent. Since the Dice coefficient is usually written with |A| + |B|, I would personally suggest that for the sake of clarity, but both are mathematically correct. A further suggestion would be to add IoU as a third option for the cost parameter, as this may be a function others would like to use in the future.
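The two denominators described above can be written out side by side. A minimal numpy sketch of the "soft" (probability-valued) versions, with my own function names and an assumed `eps` smoothing term:

```python
import numpy as np

def soft_dice(pred, target, eps=1e-7):
    # Dice = 2|A ∩ B| / (|A| + |B|): the intersection is counted in
    # BOTH denominator terms, hence the factor of 2 in the numerator.
    inter = np.sum(pred * target)
    return (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def soft_iou(pred, target, eps=1e-7):
    # IoU = |A ∩ B| / |A ∪ B|: the union counts the intersection once,
    # so it is subtracted after summing |A| and |B|.
    inter = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target) - inter
    return (inter + eps) / (union + eps)
```

The two are monotonically related (Dice = 2·IoU / (1 + IoU)), so they rank segmentations identically, but their numeric values differ, which is why conflating them in reported scores is a problem.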
@wkeithvan Good point. Actually, I believe that what is currently called
I agree... I was just going through and thinking the same thing. I would rename the
Perhaps this?
Make sure to update the documentation/comment at line 207 to note that intersection over union ("iou") is a valid choice for the cost function. Otherwise looks good.
First of all, thanks for your contribution. I just found some time to have a look, and I noticed that there are syntax errors in the commit.
Sorry about the syntax errors - I don't use Python often. I'm still not sure about the NaN problem. |
It's running, although I'm getting that black output again... (#182) |
I am a student who's relatively new to the field, but:
I was inspecting this code for use in a project, and I noticed that the dice_coefficient calculation sums the intersection of the prediction and y over all dimensions, including the class dimension, before dividing by the "union." This seems incorrect: the Dice score should be computed by taking the intersection/union ratio separately for each class and then summing the results (see e.g. the first answer here: https://stats.stackexchange.com/questions/255465/accuracy-vs-jaccard-for-multiclass-problem). That weights the predictions properly according to the frequency of each class in the image.
In contrast, the current code divides every prediction by the scalar "union," which will always equal 2 * height * width * batch because of the softmax. This is equivalent to pixel accuracy. I've changed the code to defer reducing across the class/channel dimension until after the division.
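The difference between the two reductions can be sketched in numpy (an illustration of the argument above, not the repo's actual TensorFlow code; shapes and names are my assumptions):

```python
import numpy as np

def dice_summed_over_all(probs, onehot):
    # "original" style: reduce over batch, space, AND class before dividing.
    # Because probs is a softmax output, np.sum(probs) == n_pixels and
    # np.sum(onehot) == n_pixels, so the denominator is the constant
    # 2 * n_pixels and the score collapses to the mean per-pixel
    # probability of the true class (i.e. soft pixel accuracy).
    inter = np.sum(probs * onehot)
    return 2.0 * inter / (np.sum(probs) + np.sum(onehot))

def dice_per_class(probs, onehot, eps=1e-7):
    # "fixed" style: keep the class axis, divide per class, then average.
    axes = (0, 1, 2)  # reduce over batch and spatial dims only
    inter = np.sum(probs * onehot, axis=axes)
    sums = np.sum(probs, axis=axes) + np.sum(onehot, axis=axes)
    return float(np.mean((2.0 * inter + eps) / (sums + eps)))
```

With an imbalanced image (say, three background pixels and one foreground pixel) a model that predicts background everywhere gets a high score from the summed version, while the per-class version penalizes it for missing the rare class entirely.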
If this is wrong I apologize, but I thought this was serious enough that I should bring it to attention.