
Top-K precision/recall multilabel metrics for ranking task #467

Open · vfdev-5 opened this issue Mar 31, 2019 · 10 comments
vfdev-5 (Collaborator) commented Mar 31, 2019

Following the discussion in #466 (comment), it would be nice to have such a metric in Ignite.

In the context of a multilabel task, compute a top-k precision/recall per label (treating all labels independently).
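
One possible reading, as a minimal sketch (a hypothetical helper, not an existing Ignite API): for each label, treat the k highest-scoring samples as predicted positives, then score them against that label's ground truth.

```python
import torch

def topk_precision_recall_per_label(y_scores, y_true, k):
    """Top-k precision/recall computed independently for each label.

    y_scores: (n_samples, n_labels) float tensor of predicted scores
    y_true:   (n_samples, n_labels) {0, 1} tensor of true labels
    """
    # Indices of the k highest-scoring samples for every label: (k, n_labels)
    topk_idx = y_scores.topk(k, dim=0).indices
    # True positives per label among those k samples
    hits = torch.gather(y_true, 0, topk_idx).sum(dim=0)
    precision = hits.float() / k
    recall = hits.float() / y_true.sum(dim=0).clamp(min=1)
    return precision, recall  # one value per label

# 4 samples, 3 non-exclusive labels, k=2
scores = torch.tensor([[0.9, 0.1, 0.4],
                       [0.2, 0.8, 0.7],
                       [0.6, 0.3, 0.9],
                       [0.1, 0.7, 0.2]])
truth = torch.tensor([[1, 0, 0],
                      [0, 1, 1],
                      [0, 0, 1],
                      [0, 1, 0]])
p, r = topk_precision_recall_per_label(scores, truth, k=2)
# p = [0.5, 1.0, 1.0], r = [1.0, 1.0, 1.0]
```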

Data-drone commented

Any update on this?

vfdev-5 (Collaborator, Author) commented Sep 10, 2019

cc @anmolsjoshi

vfdev-5 (Collaborator, Author) commented Sep 10, 2019

@Data-drone in case it helps, you might be interested in this temporary solution:
#513 (comment)

Tanmay06 commented Oct 6, 2020

Hi! I would like to work on this. Would you please assign it to me?

vfdev-5 (Collaborator, Author) commented Oct 6, 2020

@Tanmay06 sure!

vfdev-5 added the PyDataGlobal (PyData Global 2020 Sprint) label and removed the Hacktoberfest label Oct 31, 2020
vfdev-5 (Collaborator, Author) commented Nov 4, 2020

@Tanmay06 any updates on this issue?

Tanmay06 commented Nov 6, 2020

I've almost completed it, but I'm hitting some bugs. I'll fix those and should raise a PR by this weekend.

vfdev-5 (Collaborator, Author) commented Nov 6, 2020

Sounds good! Do not hesitate to send a draft PR so that we can iterate faster.

vfdev-5 removed the PyDataGlobal (PyData Global 2020 Sprint) label Dec 14, 2020
vfdev-5 added the module: metrics (Metrics module) label Jan 18, 2021
Tanmay06 removed their assignment Dec 30, 2021
julien-blanchon commented Mar 17, 2023

Just to be sure I understand this issue well:

1 - By multilabel (in the context of classification), you mean that each data point can belong to multiple classes at the same time. For example, in an image classification task, a single image may contain multiple objects, and the model needs to predict all the objects present in the image.

2 - Following the reference given by @RoyHirsch in #466, these look like MultilabelPrecision [1] and MultilabelRecall [2] from torchmetrics.classification (a usage sketch follows below).

3 - What's blocking #516? Are multilabel Precision and Recall expected to come in that PR?

[1] torchmetrics.classification.MultilabelPrecision
[2] torchmetrics.classification.MultilabelRecall
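
For concreteness, a small usage sketch of those two classes (assuming a recent torchmetrics, >= 0.11). As far as I can tell they threshold the scores rather than ranking them top-k, which may be exactly the gap this issue is about:

```python
from torch import tensor
from torchmetrics.classification import MultilabelPrecision, MultilabelRecall

preds = tensor([[0.9, 0.1, 0.8],
                [0.2, 0.7, 0.4]])   # predicted probabilities, (n_samples, n_labels)
target = tensor([[1, 0, 1],
                 [0, 1, 1]])        # non-exclusive ground-truth labels

# Scores above `threshold` (default 0.5) count as predicted positives.
precision = MultilabelPrecision(num_labels=3, average=None)
recall = MultilabelRecall(num_labels=3, average=None)
print(precision(preds, target))  # per-label precision
print(recall(preds, target))     # per-label recall
```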

vfdev-5 (Collaborator, Author) commented Mar 17, 2023

By multilabel (in the context of classification), you mean that each data point can belong to multiple classes at the same time.

Yes, exactly, non-exclusive class labels, similar to tags. For 3 classes, ground truth can be y=[0, 1, 1] or y=[1, 1, 1] or y=[0, 0, 0] etc.

2 - Following the reference given by @RoyHirsch in #466, these look like MultilabelPrecision [1] and MultilabelRecall [2] from torchmetrics.classification.

I do not know what those MultilabelPrecision [1] and MultilabelRecall [2] classes compute.
I have to reread https://arxiv.org/pdf/1312.4894.pdf to refresh my memory of what we wanted to compute...

What's blocking #516? Are multilabel Precision and Recall expected to come in that PR?

This PR is rather old, and adding a new arg to the API is not a good idea, IMO. Maybe introducing an arg like average (see Precision, Recall) would make more sense; a rough sketch of that idea is below...
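
Purely as an illustration of that idea (TopKPrecision is a hypothetical name, not an existing Ignite class), an average-style arg could look like:

```python
import torch

class TopKPrecision:
    """Hypothetical sketch: per-label precision@k with an `average` arg
    mirroring the semantics of ignite.metrics.Precision."""

    def __init__(self, k, average="macro"):
        self.k = k
        self.average = average

    def compute(self, y_scores, y_true):
        # The k highest-scoring samples per label count as predicted positives
        topk_idx = y_scores.topk(self.k, dim=0).indices
        hits = torch.gather(y_true, 0, topk_idx).sum(dim=0)
        per_label = hits.float() / self.k
        # average="macro" -> mean over labels; average=None -> per-label values
        return per_label.mean() if self.average == "macro" else per_label
```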
