Top-K precision/recall multilabel metrics for ranking task #467
Comments
Any update on this?
cc @anmolsjoshi
@Data-drone in case you might be interested in this temporary solution:
Hi! I would like to work on this. Would you please assign it to me?
@Tanmay06 sure!
@Tanmay06 any updates on this issue?
I've almost completed it, but I'm getting some bugs. I'll fix those and most likely raise a PR by this weekend.
Sounds good! Do not hesitate to send a draft PR so that we can iterate faster.
Just to be sure I understand this issue well:
1. By multilabel (in the context of classification), you mean that each data point can belong to multiple classes at the same time. For example, in an image classification task, a single image may contain multiple objects, and the model needs to predict all the objects present in the image.
2. Following the reference given by @RoyHirsch in #466, this looks like
3. What's blocking #516? Are Multilabel Precision and Recall expected to come in that PR?
Yes, exactly, non-exclusive class labels, similar to tags. For 3 classes, ground truth can be
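To make the "non-exclusive class labels" point concrete, here is a hedged illustration (not taken from the thread): with 3 classes, each sample's ground truth is a multi-hot vector rather than a single class index.

```python
import torch

# Hypothetical example: multi-hot ground truth for 3 non-exclusive classes.
y_true = torch.tensor([
    [1, 0, 1],  # sample belongs to classes 0 and 2
    [0, 1, 0],  # sample belongs to class 1 only
    [1, 1, 1],  # sample belongs to all three classes
])

# Number of positive labels per sample can vary, unlike multiclass.
print(y_true.sum(dim=1).tolist())  # → [2, 1, 3]
```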
I do not know what those classes, MultilabelPrecision [1] and MultilabelRecall [2], are doing.
This PR is rather old, and adding a new arg to the API is not a good idea, IMO. Maybe introducing an arg like
Following the discussion from #466 (comment), it would be nice to have such a metric in Ignite.
In the context of a multilabel task, compute a top-k precision/recall per label (treating all labels independently).
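One possible reading of this request, sketched below under my own assumptions (the function name, signature, and the sample-averaged formulation are hypothetical, not Ignite's API): for each sample, take the k highest-scoring labels, count how many of them are relevant, then average precision@k and recall@k over samples.

```python
import torch

def topk_precision_recall(y_scores: torch.Tensor, y_true: torch.Tensor, k: int):
    """Hypothetical sketch, not the Ignite implementation.

    y_scores: (N, C) real-valued label scores.
    y_true:   (N, C) multi-hot {0, 1} ground truth.
    """
    topk_idx = y_scores.topk(k, dim=1).indices            # (N, k) highest-scoring labels
    hits = y_true.gather(1, topk_idx).sum(dim=1).float()  # relevant labels among top-k
    precision = hits / k
    # clamp avoids division by zero for samples with no positive label
    recall = hits / y_true.sum(dim=1).clamp(min=1).float()
    return precision.mean().item(), recall.mean().item()

scores = torch.tensor([[0.9, 0.1, 0.8],
                       [0.2, 0.7, 0.3]])
truth = torch.tensor([[1, 0, 1],
                      [1, 1, 0]])
p, r = topk_precision_recall(scores, truth, k=2)  # → p = 0.75, r = 0.75
```

An alternative reading of "per label (treating all labels independently)" would compute a precision/recall curve separately for each label column; which variant is wanted is exactly what the thread is discussing.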