Issue 301: Address coverage gaps in bias.R #305
Conversation
…same result in the limit
@nikosbosse Unless I am very confused
Not that you've asked me, but I agree - worth adding tests for
… just when there is a column we need (i.e. in summarise_score)
…make them work with more complete testing of inputs
Addressed in ae4a857
To get the vignette to work I have had to hack around the point-forecasts-only case, as this depends on this hack:

if (is.na(range[1]) && !any(range[-1] == 0)) {
  range[1] <- 0
}
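As a rough illustration of what the guard does mechanically (my own example, not code from the PR; the values are made up), a range vector with a missing first entry and no 0% range elsewhere gets the missing entry set to 0:

range <- c(NA, 50, 90)   # hypothetical ranges for a point-forecast-only case
if (is.na(range[1]) && !any(range[-1] == 0)) {
  range[1] <- 0          # my reading: treat the point forecast as the 0% range
}
range
#> [1]  0 50 90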
Codecov Report
@@            Coverage Diff             @@
##             main     #305      +/-   ##
==========================================
+ Coverage   89.68%   90.78%   +1.09%
==========================================
  Files          22       22
  Lines        1377     1378       +1
==========================================
+ Hits         1235     1251      +16
+ Misses        142      127      -15
I think this should be ready for review now. To get this working with some of the edge cases (point forecasts only, for example) I have had to insert some hacks. 😆 It turned into a lot of work for a 1% increase in coverage! This PR has made me think more about how we go about software dev on this package. It might be useful to sit down and spend some time talking through it, and potentially draw up some agreed practices + a roadmap for changes we could make to reduce technical debt.
🙏🏻 📿 🙏🏻
Overall really nice, thank you! Left only a few comments. I made some suggested changes updating the docs (the LaTeX points Seb mentioned), but I haven't made suggested edits for all of them.
It's an important 1% :D Thanks!
Co-authored-by: Nikos Bosse <[email protected]>
This PR addresses coverage gaps in bias.R as part of addressing #301. Unfortunately, in doing so it appears to have found a few issues. These are:

- bias_sample(): could not detect integer forecasts correctly and so always used continuous forecasts (see the sketch below).
- bias_range(): was an effective duplicate of bias_quantile(). It was only tested by checking for equivalence against itself (which of course it passed).
- get_prediction_type(): did not fail appropriately when supplied with insufficient data to evaluate the prediction. When this occurred it defaulted to a continuous prediction.

I have addressed these issues here, but it has required a few internal changes and one potentially breaking change.
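As a rough illustration of the integer-detection point, a check along the following lines is what distinguishes integer from continuous sample forecasts. This is a minimal sketch, not the package's implementation, and the helper name is hypothetical:

# Hypothetical helper (not scoringutils code): decide whether a matrix of
# predictive samples should be scored with the integer or continuous method.
is_integer_forecast <- function(predictions) {
  is.numeric(predictions) && all(predictions == floor(predictions))
}

is_integer_forecast(matrix(c(1, 2, 3, 4), nrow = 2))    # TRUE  -> integer method
is_integer_forecast(matrix(c(1.5, 2, 3, 4), nrow = 2))  # FALSE -> continuous method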
See @sbfnk's point that the documentation for bias_range() and bias_quantile() is out of sync. I have not addressed this here - see #309.

In the README I see different values for bias despite this being a quantile forecast and so not impacted by the bug identified above. As score_quantile() calls bias_range(), this could indicate a few things: a bug in the interaction between bias_range() and bias_quantile(); a bug in bias_quantile() that is just coming to light or that has newly been introduced; or that bias_range() had a bug. Will look at this in more detail. - Fixed by @nikosbosse.

I have fixed linting issues where changing files has caused them to be flagged.