clean up metrics code, use MetricCollection #1529
Conversation
Codecov Report
@@ Coverage Diff @@
## master #1529 +/- ##
==========================================
- Coverage 91.12% 91.08% -0.05%
==========================================
Files 113 113
Lines 8996 9015 +19
==========================================
+ Hits 8198 8211 +13
- Misses 798 804 +6
Continue to review full report at Codecov.
@PierreBoyeau @adamgayoso updated and addressed my own comments. Condensed the code by relying on this part of the MetricCollection code: https://github.com/PyTorchLightning/metrics/blob/11641b8558e713649743709d78d3ab363bcfc9aa/torchmetrics/metric.py#L681-L682
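The filtering logic linked above can be sketched in plain Python. This is a hedged, stdlib-only approximation of what torchmetrics' internal kwarg-filtering helper does (as I read the linked lines); `filter_kwargs` and `elbo_update` are hypothetical stand-ins, not the actual scvi-tools or torchmetrics functions:

```python
import inspect

def filter_kwargs(update_fn, **kwargs):
    """Keep only the kwargs that update_fn's signature accepts,
    roughly mimicking torchmetrics' internal kwarg filtering."""
    params = inspect.signature(update_fn).parameters
    # If the update fn itself accepts **kwargs, pass everything through.
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return dict(kwargs)
    return {k: v for k, v in kwargs.items() if k in params}

def elbo_update(reconstruction_loss_sum, kl_local_sum, n_obs_minibatch):
    # Hypothetical update fn that only needs three of the logged values.
    return (reconstruction_loss_sum + kl_local_sum) / n_obs_minibatch

filtered = filter_kwargs(
    elbo_update,
    reconstruction_loss_sum=10.0,
    kl_local_sum=2.0,
    kl_global=0.5,        # dropped: not in elbo_update's signature
    n_obs_minibatch=128,
)
```

With this behavior, one `update()` call on the collection can carry the full set of loss outputs and each metric silently picks up only the arguments it declares.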
Also noticed that GIMVI doesn't use this and seems hard to adapt to it. Does that matter?
scvi/train/_metrics.py
    kl_local_sum: torch.Tensor,
    kl_global: torch.Tensor,
    n_obs_minibatch: int,
    **kwargs,
):
    """Updates all metrics."""
Might be good to explain somewhere that, in a metric collection, the arguments get passed to each individual metric.
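To illustrate the point raised above, here is a minimal, stdlib-only sketch of how one `update()` call on a collection fans out to every metric it holds. All class names here (`Mean`, `ObsCounter`, `TinyCollection`) are hypothetical stand-ins, not torchmetrics or scvi-tools classes; each metric's `update` accepts `**kwargs` and picks out what it needs, mirroring the "filtering inside the update fn" approach this PR uses:

```python
class Mean:
    """Toy running-mean metric; only uses the `value` kwarg."""
    def __init__(self):
        self.total, self.count = 0.0, 0
    def update(self, value, **kwargs):
        self.total += value
        self.count += 1
    def compute(self):
        return self.total / self.count

class ObsCounter:
    """Toy metric counting observations; only uses `n_obs_minibatch`."""
    def __init__(self):
        self.n_obs = 0
    def update(self, n_obs_minibatch, **kwargs):
        self.n_obs += n_obs_minibatch
    def compute(self):
        return self.n_obs

class TinyCollection:
    """One update() call forwards the same kwargs to every metric."""
    def __init__(self, **metrics):
        self.metrics = metrics
    def update(self, **kwargs):
        for m in self.metrics.values():
            m.update(**kwargs)
    def compute(self):
        return {name: m.compute() for name, m in self.metrics.items()}

coll = TinyCollection(loss=Mean(), n_obs=ObsCounter())
coll.update(value=3.0, n_obs_minibatch=128, kl_global=0.5)
coll.update(value=1.0, n_obs_minibatch=64, kl_global=0.5)
result = coll.compute()
```

Each metric ignores the kwargs it doesn't declare (`kl_global` here), so the caller can log everything in one place without tailoring arguments per metric.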
PR looks good to me, also did a basic sanity check here to double-check that the metrics overall make sense.
* clean up metrics code
* condense elbo metric code with filtering inside the update fn
* codacy
* address comments

Co-authored-by: Justin Hong <[email protected]>
By calling .update() explicitly, it should speed things up according to their documentation, as we don't need the per-step values. This also fixes an issue where n_obs_training might not reflect the number of cells in an epoch, as in the large-scale training params where we restrict an epoch to 20 minibatches.
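The update-vs-forward distinction mentioned above can be sketched with a toy metric. This is a hedged, stdlib-only illustration of the contract as the torchmetrics docs describe it (the `RunningSum` class is hypothetical): calling the metric (its forward path) both accumulates state and computes a per-step value, while `update()` only accumulates, which is cheaper when the per-step value is never read:

```python
class RunningSum:
    """Toy metric contrasting update() with forward()."""
    def __init__(self):
        self.total = 0.0
        self.forward_computes = 0
    def update(self, x):
        # Accumulate state only; no per-step result is produced.
        self.total += x
    def forward(self, x):
        # Accumulates AND computes a per-step value (extra work).
        self.update(x)
        self.forward_computes += 1
        return x
    def compute(self):
        # Epoch-level value derived from accumulated state only.
        return self.total

m = RunningSum()
for x in [1.0, 2.0, 3.0]:
    m.update(x)          # explicit update: skip per-step computation
epoch_total = m.compute()
```

Because `compute()` derives everything from the accumulated state, the epoch-end value also reflects exactly the minibatches that were actually seen, which is the n_obs_training fix described above.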