🚀 The feature
COCO mAP
COCO mAP (mean average precision) is a widely used evaluation metric for object detection models, particularly on the COCO dataset. Unlike the PASCAL VOC evaluation, which assesses a detector at a single IoU (Intersection over Union) threshold, the COCO evaluator averages AP over the 80 COCO classes and over 10 IoU thresholds from 0.5 to 0.95 in steps of 0.05 (AP@[0.5:0.05:0.95]). Averaging across thresholds avoids the bias that any single threshold can introduce and gives a more complete picture of a detector's localization quality.
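As a minimal illustration of the threshold sweep described above, a sketch might look like the following. This is not the full COCO evaluator (which also handles per-class matching, area ranges, max-detection limits, and 101-point precision interpolation); the box format `[x1, y1, x2, y2]` and the helper names are assumptions for the example.

```python
# Illustrative sketch of the COCO-style IoU threshold sweep, not the
# official evaluator. Boxes are assumed to be [x1, y1, x2, y2].

def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# The 10 COCO IoU thresholds: 0.50, 0.55, ..., 0.95.
COCO_IOU_THRESHOLDS = [0.5 + 0.05 * i for i in range(10)]

def coco_map(ap_per_threshold):
    """Average the 10 per-threshold AP values (already averaged over
    classes) into the single COCO mAP number."""
    assert len(ap_per_threshold) == len(COCO_IOU_THRESHOLDS)
    return sum(ap_per_threshold) / len(ap_per_threshold)
```

A detection counts as a true positive at a given threshold only if its IoU with an unmatched ground-truth box meets that threshold, so the stricter thresholds near 0.95 reward precise localization.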
Motivation, pitch
COCO mAP has an official API, but it is no longer actively maintained and has become outdated.
Alternatives
No response
Additional context
No response