Docs: return description of binary_confusion_matrix is incorrect #183
Comments
Hi @ChrisBog-MV, thanks for pointing this out! @JKSenthil I went through and looked at this. The issue is that we are following sklearn's convention of returning

[[tn, fp], [fn, tp]]

which is not standard for binary confusion matrices. I would suggest we flip the rows and columns here with a bit of slicing (see the sketch below); tests and docstring examples will need to be updated as well. But basically we will then return

[[tp, fn], [fp, tn]]

which is standard.
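A minimal sketch of the kind of row/column flip being suggested, assuming the metric currently produces the sklearn-style layout; `flip_confusion_matrix` is a hypothetical helper, not part of torcheval:

```python
import torch

def flip_confusion_matrix(matrix: torch.Tensor) -> torch.Tensor:
    # Reverse both rows and columns, turning the sklearn layout
    # [[tn, fp], [fn, tp]] into the standard [[tp, fn], [fp, tn]].
    return torch.flip(matrix, dims=[0, 1])

# sklearn-style matrix with tn=1, fp=1, fn=0, tp=2
m = torch.tensor([[1, 1], [0, 2]])
print(flip_confusion_matrix(m))
# tensor([[2, 0],
#         [1, 1]])  i.e. [[tp, fn], [fp, tn]]
```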
Hi! +1. Also, I made an example that illustrates the problem using expected unique counts for each category (i.e. FP, FN, etc.); see the sketch below.
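A sketch of the kind of example described, assuming torcheval's `binary_confusion_matrix` with its default 0.5 threshold; the specific inputs here are hypothetical, chosen so that every category has a distinct count:

```python
import torch
from torcheval.metrics.functional import binary_confusion_matrix

# Distinct expected counts per category: tp=4, fn=3, fp=2, tn=1
target = torch.tensor([1] * 4 + [1] * 3 + [0] * 2 + [0] * 1)
input = torch.tensor([1.0] * 4 + [0.0] * 3 + [1.0] * 2 + [0.0] * 1)

print(binary_confusion_matrix(input, target))
# Prints tensor([[1, 2], [3, 4]]) if the actual layout is the
# sklearn-style [[tn, fp], [fn, tp]] discussed in this thread,
# rather than the [[tp, fn], [fp, tn]] the docstring describes.
```

With a distinct count for each category, the printed matrix identifies the layout unambiguously.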
Not sure if I'm being daft here but I think this line is incorrect:
torcheval/torcheval/metrics/functional/classification/confusion_matrix.py, line 23 (at commit e0444c6):
It says that the returned tensor contains the values [[tp, fn], [fp, tn]].
But if you look at the examples, it shows (e.g.):
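(The docstring example referred to is presumably along the following lines; this is a reconstruction, and the exact input values are an assumption.)

```python
>>> import torch
>>> from torcheval.metrics.functional import binary_confusion_matrix
>>> input = torch.tensor([0, 1, 0.7, 0.6])
>>> target = torch.tensor([0, 1, 1, 0])
>>> binary_confusion_matrix(input, target)
tensor([[1, 1],
        [0, 2]])
```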
Which for those inputs, I count: tn=1, fp=1, fn=0, tp=2.
So I believe that would mean the actual tensor being returned is either [[fp, tn], [fn, tp]] or [[tn, fp], [fn, tp]]. From my own experiments, I'm pretty sure it's the latter. Been scratching my head all morning about why my results look wrong and I think this is why.