In here:
https://github.com/thuml/HashNet/blob/master/pytorch/src/test.py#L51
you compute AP with `relevant_num` as the divisor, where `relevant_num` is the number of relevant entries within the returned answer of length `R`, not the total number of relevant entries in the ground truth. `relevant_num` is therefore always less than or equal to `R`.
Here https://github.com/benhamner/Metrics/blob/master/Python/ml_metrics/average_precision.py#L39 it is computed differently: there, the divisor is `min(total_number_of_relevant, k)`.
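To make the difference concrete, here is a minimal sketch of both divisors (the helper `ap_at_k` is hypothetical, not code from either repository); everything except the final division is identical:

```python
import numpy as np

def ap_at_k(ranked_relevance, k, total_relevant, divide_by_retrieved=False):
    """AP@k for a single query.

    ranked_relevance:    0/1 per returned item, 1 = relevant to the query.
    total_relevant:      relevant items in the entire ground truth.
    divide_by_retrieved: True  -> divide by the relevant items that happen
                                  to appear in the top-k (relevant_num);
                         False -> divide by min(total_relevant, k).
    """
    rel = np.asarray(ranked_relevance[:k], dtype=float)
    # precision@i at every cut-off i = 1..k
    precision_at_i = np.cumsum(rel) / np.arange(1, len(rel) + 1)
    hits = rel.sum()
    divisor = hits if divide_by_retrieved else min(total_relevant, k)
    return 0.0 if divisor == 0 else float((precision_at_i * rel).sum() / divisor)

# 10 relevant items exist in the ground truth; only 2 appear in the top-5.
ranked = [1, 0, 1, 0, 0]
print(ap_at_k(ranked, k=5, total_relevant=10, divide_by_retrieved=True))   # (1 + 2/3) / 2 ~= 0.833
print(ap_at_k(ranked, k=5, total_relevant=10, divide_by_retrieved=False))  # (1 + 2/3) / 5 ~= 0.333
```

Dividing by the number of relevant items actually retrieved rewards the query even though 8 of its 10 relevant items were missed, which is why the two conventions can report very different MAP values on the same ranking.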
Also see the discussion here: https://stackoverflow.com/questions/40906671/confusion-about-mean-average-precision

> It is not possible to cheat AP by tweaking the size of the returned ranked list. AP is the area under the precision-recall curve, which plots precision as a function of recall. Recall is the number of returned positives relative to the total number of positives in the ground truth, not relative to the number of positives in the returned list. So if you crop the list, all you are doing is cropping the precision-recall curve and leaving its tail unplotted.

And:

> Your confusion might be related to the way some popular functions, such as VLFeat's vl_pr, compute precision-recall curves: they assume you have provided the entire ranked list, and therefore derive the total number of positives in the ground truth by looking at the ranked list instead of the ground truth itself. So if you used vl_pr naively on cropped lists you could indeed cheat it, but that would be an invalid computation.
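To illustrate the quoted point, here is a minimal sketch (the helper `plain_ap` is hypothetical, not from any of the linked codebases) of plain AP with the denominator fixed to the ground-truth total; cropping the ranked list can only drop precision mass from the tail, never raise the score:

```python
import numpy as np

def plain_ap(ranked_relevance, total_relevant):
    """Plain AP: sum of precision@hit over hit positions, divided by the
    TOTAL number of ground-truth positives, not the number retrieved."""
    rel = np.asarray(ranked_relevance, dtype=float)
    precision_at_i = np.cumsum(rel) / np.arange(1, len(rel) + 1)
    return float((precision_at_i * rel).sum() / total_relevant)

full = [1, 0, 1, 0, 0, 0, 0, 0, 0, 1]  # 3 ground-truth positives in total
print(plain_ap(full, total_relevant=3))      # (1 + 2/3 + 3/10) / 3 ~= 0.656
print(plain_ap(full[:5], total_relevant=3))  # (1 + 2/3) / 3       ~= 0.556
```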
Also, here is an explanation of MAP@k: https://www.kaggle.com/c/FacebookRecruiting/discussion/2002

> The number you divide by is the number of points possible. This is the lesser of ten (the most you can predict) and the number of actual correct answers that exist.
Am I missing something, or is your code incorrect? It is true that many other hashing papers compute MAP the same way.