🐛 Bug
While micro and macro `AUROC` play well with each other and macro `AveragePrecision`, micro `AveragePrecision` will not be merged into the same compute group.

This is due to `AveragePrecision` flattening its predictions and targets in the `update()` call (see here) while `AUROC` flattens only in its `compute()` (see here). Because of that, the shapes don't align and the compute group merge will fail.

To Reproduce
Code sample
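A minimal sketch of a reproduction (untested; it assumes the 0.9-era `MetricCollection`/`AUROC`/`AveragePrecision` signatures, multilabel-shaped inputs, and uses the internal `_groups` attribute, whose name may differ between versions):

```python
import torch
from torchmetrics import AUROC, AveragePrecision, MetricCollection

metrics = MetricCollection(
    {
        "auroc_micro": AUROC(num_classes=3, average="micro"),
        "auroc_macro": AUROC(num_classes=3, average="macro"),
        "ap_micro": AveragePrecision(num_classes=3, average="micro"),
        "ap_macro": AveragePrecision(num_classes=3, average="macro"),
    },
    compute_groups=True,  # the default since 0.9
)

# Multilabel-shaped inputs: (N, C) probabilities and (N, C) binary targets.
preds = torch.rand(10, 3)
target = torch.randint(0, 2, (10, 3))
metrics.update(preds, target)

# Micro AveragePrecision already flattened its state in update(), so its
# shapes no longer match the others and it lands in a group of its own.
print(metrics["auroc_micro"].preds[0].shape)  # e.g. torch.Size([10, 3])
print(metrics["ap_micro"].preds[0].shape)     # e.g. torch.Size([30])
print(metrics._groups)  # internal attribute, printed only for inspection
```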
Expected behavior
Micro `AveragePrecision` shouldn't flatten during `update()` but during `compute()`, which would allow it to have its state shared with e.g. `AUROC` and itself.
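For illustration, a minimal sketch of that behavior (the class name and the use of the functional `average_precision` are mine, not the actual torchmetrics internals):

```python
import torch
from torchmetrics import Metric
from torchmetrics.functional import average_precision
from torchmetrics.utilities.data import dim_zero_cat


class MicroAveragePrecisionSketch(Metric):
    """Illustrative only: defer the micro flattening to compute()."""

    def __init__(self, num_classes: int):
        super().__init__()
        self.num_classes = num_classes
        # Same list states (and shapes) that AUROC keeps, so a compute
        # group merge between the two could succeed.
        self.add_state("preds", default=[], dist_reduce_fx="cat")
        self.add_state("target", default=[], dist_reduce_fx="cat")

    def update(self, preds: torch.Tensor, target: torch.Tensor) -> None:
        # Store raw (N, C) preds and (N,) targets -- no flattening here.
        self.preds.append(preds)
        self.target.append(target)

    def compute(self) -> torch.Tensor:
        preds = dim_zero_cat(self.preds)    # (N, C)
        target = dim_zero_cat(self.target)  # (N,)
        # Micro averaging: flatten only now, inside compute().
        flat_preds = preds.flatten()
        flat_target = torch.nn.functional.one_hot(
            target, num_classes=self.num_classes
        ).flatten()
        # Binary AP over all flattened (sample, class) pairs == micro AP.
        return average_precision(flat_preds, flat_target, pos_label=1)
```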
Environment

- TorchMetrics version (and how you installed TM, e.g. conda, pip, build from source): 0.9.1, pip
- Python & PyTorch Version (e.g., 1.0): 3.8.12 & 1.11.0
- Any other relevant information such as OS (e.g., Linux): FROM pytorch/pytorch:1.11.0-cuda11.3-cudnn8-devel
Additional context
This hurts especially for `AveragePrecision` and the like, since such metrics store all predictions and targets in their state.
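For a rough sense of scale (illustrative numbers, not from a real run):

```python
# Back-of-the-envelope cost of the failed merge: every metric that keeps
# full preds/target lists holds its own copy of all data seen so far.
n_batches, batch_size, num_labels = 1_000, 256, 10
bytes_per_metric = n_batches * batch_size * num_labels * 4  # float32 preds
print(f"~{bytes_per_metric / 2**20:.0f} MiB duplicated per unmerged metric")
```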