DDP Accumulate predictions from all processes across all nodes to compute accuracy #461
-
Hi. I'm using pytorch-lightning and would like to calculate the classification accuracy (#right / #total) when using DDP, where #right is the number of correct predictions from all processes and #total is the total number of predictions across all processes. I do NOT want to compute the accuracy on each process individually and then average those values; I want to accumulate the global counts of correct and total predictions and only then compute the accuracy. Currently I am doing something like this:
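Roughly, the setup looks like the following (a simplified sketch, not my exact code; the model, class, and argument names are placeholders):

```python
import torch
import torchmetrics
import pytorch_lightning as pl


class LitClassifier(pl.LightningModule):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.model = torch.nn.Linear(32, num_classes)  # stand-in for the real backbone
        # newer torchmetrics needs task/num_classes; older versions accepted Accuracy() with no args
        self.accuracy = torchmetrics.Accuracy(task="multiclass", num_classes=num_classes)

    def validation_step(self, batch, batch_idx):
        x, y = batch
        preds = self.model(x).argmax(dim=-1)
        self.accuracy.update(preds, y)  # accumulates #correct and #total on this process

    def validation_epoch_end(self, outputs):
        # compute() is expected to sync the accumulated states across all DDP
        # processes and return global #correct / #total
        self.log("acc", self.accuracy.compute())
```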
However, when I compare the output of `self.accuracy.compute()` to a manual computation (printing the number of correct and total predictions from each individual process and summing them), the two values are not the same. What is the correct way to get my expected output? Thank you!
-
Hi,

The docs state that the metric is reset automatically if the metric object is logged directly using `self.log`, something like this: `self.log('acc', self.accuracy)` (log the metric object, not the value returned by `compute()`). However, the easiest thing for you to do is to manually reset the metric by adding `self.accuracy.reset()` as the last statement of your `validation_epoch_end` method.
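For illustration, either route might look roughly like this (a sketch only; it assumes the metric is stored as `self.accuracy` on your LightningModule, as in your post):

```python
# Option 1: log the metric object itself; Lightning then takes care of calling
# compute() (with cross-process sync) and reset() at the end of the epoch.
def validation_step(self, batch, batch_idx):
    x, y = batch
    preds = self.model(x).argmax(dim=-1)
    self.accuracy.update(preds, y)
    self.log("acc", self.accuracy)  # log the object, not self.accuracy.compute()


# Option 2: compute and log the value yourself, then reset manually so the
# accumulated states do not carry over into the next epoch.
def validation_epoch_end(self, outputs):
    self.log("acc", self.accuracy.compute())  # compute() syncs states across processes
    self.accuracy.reset()                     # last statement of the hook
```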