[FIX] results management and visualisation with missing test data #465
Conversation
Codecov Report
@@ Coverage Diff @@
## development #465 +/- ##
================================================
+ Coverage 64.65% 85.48% +20.82%
================================================
Files 231 231
Lines 16304 16351 +47
Branches 3009 3028 +19
================================================
+ Hits 10542 13977 +3435
+ Misses 4714 1535 -3179
+ Partials 1048 839 -209
The fix seems to resolve the issue, and both added test cases assert correct functionality. Some minor comments on my part.
autoPyTorch/utils/results_manager.py (Outdated)
@@ -28,6 +28,9 @@
]

OPTIONAL_INFERENCE_CHOICES = ('test',)
Would it make sense to move this to constants.py?
autoPyTorch/utils/results_manager.py (Outdated)
Checks if the data is missing for each optional inference choice and
sets the scores for that inference choice to all None.
It would be nice to also add the case of score == metric._worst_possible_result
in the docstring:
Checks if the data is missing or if all scores are equal to the worst possible result for each optional inference choice and sets the scores for that inference choice to all None.
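The suggested behaviour can be sketched as follows. This is an illustrative sketch, not the actual autoPyTorch implementation: the helper name mask_missing_inference_scores and the plain-dict score layout are assumptions for demonstration.

```python
# Optional inference choices, as added in this PR.
OPTIONAL_INFERENCE_CHOICES = ('test',)


def mask_missing_inference_scores(scores, worst_possible_result):
    """Set an optional inference choice's scores to None when the data is
    missing or every recorded score equals the metric's worst possible result.

    scores: dict mapping inference choice -> list of scores (or None).
    """
    for choice in OPTIONAL_INFERENCE_CHOICES:
        values = scores.get(choice)
        if values is None or all(
            v is None or v == worst_possible_result for v in values
        ):
            scores[choice] = None
    return scores
```

With this masking, downstream plotting and statistics code can treat a None entry as "no data for this inference choice" instead of plotting worst-case placeholder values.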
* [FIX] Documentation and docker workflow file (#449)
* fixes to documentation and docker
* fix to docker
* Apply suggestions from code review
* add change log for release (#450)
* [FIX] release docs (#452)
* Release 0.2
* Release 0.2.0
* fix docs new line
* [FIX] ADD forecasting init design to pip data files (#459)
* add forecasting_init.json to data files under setup
* avoid undefined reference in scale_value
* checks for time series dataset split (#464)
* checks for time series dataset split
* maint
* Update autoPyTorch/datasets/time_series_dataset.py
Co-authored-by: Ravin Kohli <[email protected]>
Co-authored-by: Ravin Kohli <[email protected]>
* [FIX] Numerical stability scaling for timeseries forecasting tasks (#467)
* resolve rebase conflict
* add checks for scaling factors
* flake8 fix
* resolve conflict
* [FIX] pipeline options in `fit_pipeline` (#466)
* fix update of pipeline config options in fit pipeline
* fix flake and test
* suggestions from review
* [FIX] results management and visualisation with missing test data (#465)
* add flexibility to avoid checking for test scores
* fix flake and test
* fix bug in tests
* suggestions from review
* [ADD] Robustly refit models in final ensemble in parallel (#471)
* add parallel model runner and update running traditional classifiers
* update pipeline config to pipeline options
* working refit function
* fix mypy and flake
* suggestions from review
* fix mypy and flake
* suggestions from review
* finish documentation
* fix tests
* add test for parallel model runner
* fix flake
* fix tests
* fix traditional prediction for refit
* suggestions from review
* add warning for failed processing of results
* remove unnecessary change
* update autopytorch version number
* update autopytorch version number and the example file
* [DOCS] Release notes v0.2.1 (#476)
* Release 0.2.1
* add release docs
* Update docs/releases.rst
Co-authored-by: Difan Deng <[email protected]>
Fixes issue #455
Types of changes
Note that a Pull Request should only contain one of refactoring, new features or documentation changes.
Please separate these changes and send us individual PRs for each.
For more information on how to create a good pull request, please refer to The anatomy of a perfect pull request.
Checklist:
Description
Currently, results_manager.py and results_visualizer.py assume that test data is always available. However, the API class allows users to run the optimisation without providing test data, which is a common use case, especially in AutoML. This PR makes the necessary changes to allow visualising and storing results when no test data is passed.
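The core of the change is to stop assuming that test-related keys exist in a run's additional info. A minimal sketch of that idea, assuming a plain-dict run record and a hypothetical helper name extract_scores (the real autoPyTorch code paths differ):

```python
def extract_scores(additional_info, inference_choices=('train', 'test')):
    """Return one score per inference choice, or None when that
    choice's data was never recorded (e.g. no test data was passed)."""
    return {
        # dict.get yields None instead of raising KeyError for
        # optional choices such as 'test'.
        choice: additional_info.get(f'{choice}_loss')
        for choice in inference_choices
    }
```

Callers can then skip plotting or aggregating any choice whose value is None, rather than crashing on a missing key.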
Motivation and Context
Fixes #455
How has this been tested?
I have added tests in which the run history does not contain test scores; the tests ensure that the sprint_statistics and plot_perf_over_time functions work with this run history.
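The shape of these tests can be sketched as below. This is a self-contained illustration, not the PR's actual test code: summarize is a hypothetical stand-in for sprint_statistics-style aggregation over a run history without test scores.

```python
def summarize(run_history):
    """Aggregate a run history; best_test is None when no run
    recorded a test loss (i.e. no test data was provided)."""
    train = [r['train_loss'] for r in run_history]
    test = [r.get('test_loss') for r in run_history]
    return {
        'best_train': min(train),
        'best_test': (
            None if all(t is None for t in test)
            else min(t for t in test if t is not None)
        ),
    }
```

The added tests assert that such summaries (and the corresponding plots) are produced without errors when every run lacks test scores.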