[tune] Tune experiment analysis improvements #10645
Merged: richardliaw merged 21 commits into ray-project:master from krfricke:tune-experiment-analysis on Sep 9, 2020
Changes from all commits (21 commits):
b5e10c4  added `metric` and `mode` arguments to `tune.run()`
4143008  Updated search algorithms
5eadd6c  Merge branch 'master' into tune-mode-metric
6ba754a  Updated searcher base class
6a73824  lint
4bbe2df  Fix tests
6a73ae1  Trigger new build
e998ee0  Update experiment analysis
31567b5  Set default mode and metric for experiment analysis
82bbb1d  Merge branch 'tune-mode-metric' into tune-experiment-analysis
05a7e41  Added easy to use utility functions for experiment analysis
5b8ce91  Updated docs
b015fcb  Use tune sklearn master
30749c3  Update shim default args
33fbbba  Merge branch 'master' into tune-mode-metric
f978d89  Merge branch 'tune-mode-metric' into tune-experiment-analysis
c82f079  Fix dataframe tests
d909452  Fix dataframe tests
4f452f1  Fix errors
fec8d64  Updated docs and type hints
e76e06a  Merge branch 'master' into tune-experiment-analysis
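
Taken together, the commits above add default `metric` and `mode` arguments to `tune.run()` and convenience accessors on the analysis object that `tune.run()` returns. A minimal usage sketch, assuming the function-API `tune.report` available around this release and an illustrative `mean_loss` metric (neither the trainable nor the metric name comes from this diff):

```python
from ray import tune

# Toy trainable used only for illustration; it reports a fake "mean_loss".
def trainable(config):
    for step in range(10):
        tune.report(mean_loss=config["lr"] * step)

# With this PR, metric/mode can be given once here and are then reused
# as defaults by the analysis helpers shown in the diff below.
analysis = tune.run(
    trainable,
    config={"lr": tune.grid_search([0.01, 0.1])},
    metric="mean_loss",
    mode="min")

print(analysis.best_config)  # config of the best trial
print(analysis.best_result)  # last result dict of the best trial
```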
@@ -1,11 +1,17 @@
import json
import logging
import os
from typing import Dict

from ray.tune.checkpoint_manager import Checkpoint
from ray.tune.utils import flatten_dict

try:
    import pandas as pd
    from pandas import DataFrame
except ImportError:
    pd = None
    DataFrame = None

from ray.tune.error import TuneError
from ray.tune.result import EXPR_PROGRESS_FILE, EXPR_PARAM_FILE,\
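
pandas is a soft dependency here: both `pd` and `DataFrame` fall back to `None` when the import fails, so the module still imports cleanly and the `-> DataFrame` annotations below simply evaluate to `None` at class-definition time. A minimal sketch of the same guard pattern, with `records_to_frame` being a hypothetical helper for illustration rather than anything in this file:

```python
try:
    import pandas as pd
except ImportError:  # pandas is optional
    pd = None


def records_to_frame(records):
    # Mirrors the guard used by the DataFrame-returning properties further
    # down in the diff: fail with an install hint instead of an
    # AttributeError on `None`.
    if not pd:
        raise ValueError("records_to_frame requires pandas. "
                         "Install with `pip install pandas`.")
    return pd.DataFrame.from_records(records)
```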
@@ -80,6 +86,9 @@ def dataframe(self, metric=None, mode=None):
        Returns:
            pd.DataFrame: Constructed from a result dict of each trial.
        """
        metric = self._validate_metric(metric)
        mode = self._validate_mode(mode)

        rows = self._retrieve_rows(metric=metric, mode=mode)
        all_configs = self.get_all_configs(prefix=True)
        for path, config in all_configs.items():
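
The two validation calls above let `dataframe()` fall back to the experiment-wide defaults instead of requiring explicit arguments on every call. A usage sketch, continuing with the `analysis` object from the first example (the metric name is illustrative):

```python
# Uses the metric/mode passed to tune.run() as defaults:
df = analysis.dataframe()

# Or override per call, e.g. pick each trial's row at its minimum mean_loss:
df = analysis.dataframe(metric="mean_loss", mode="min")
print(df.head())
```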
@@ -227,6 +236,9 @@ def get_best_checkpoint(self, trial, metric=None, mode=None):
        mode = self._validate_mode(mode)

        checkpoint_paths = self.get_trial_checkpoints_paths(trial, metric)
        if not checkpoint_paths:
            logger.error(f"No checkpoints have been found for trial {trial}.")
            return None
        if mode == "max":
            return max(checkpoint_paths, key=lambda x: x[1])[0]
        else:
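
The added guard makes `get_best_checkpoint` log an error and return `None` when a trial never saved a checkpoint, instead of failing on an empty sequence. A sketch of defensive usage, continuing with the same `analysis` object (and assuming the trainable actually saved checkpoints):

```python
best_trial = analysis.get_best_trial(metric="mean_loss", mode="min")
best_ckpt = analysis.get_best_checkpoint(
    best_trial, metric="mean_loss", mode="min")
if best_ckpt is None:
    print("Trial produced no checkpoints.")
else:
    print("Best checkpoint path:", best_ckpt)
```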
@@ -316,7 +328,150 @@ def __init__(self,
            os.path.dirname(experiment_checkpoint_path), default_metric,
            default_mode)

-    def get_best_trial(self, metric=None, mode=None, scope="all"):
    @property
    def best_trial(self) -> Trial:
        """Get the best trial of the experiment

        The best trial is determined by comparing the last trial results
        using the `metric` and `mode` parameters passed to `tune.run()`.

        If you didn't pass these parameters, use
        `get_best_trial(metric, mode, scope)` instead.
        """
        if not self.default_metric or not self.default_mode:
            raise ValueError(
                "To fetch the `best_trial`, pass a `metric` and `mode` "
                "parameter to `tune.run()`. Alternatively, use the "
                "`get_best_trial(metric, mode)` method to set the metric "
                "and mode explicitly.")
        return self.get_best_trial(self.default_metric, self.default_mode)

    @property
    def best_config(self) -> Dict:
        """Get the config of the best trial of the experiment

        The best trial is determined by comparing the last trial results
        using the `metric` and `mode` parameters passed to `tune.run()`.

        If you didn't pass these parameters, use
        `get_best_config(metric, mode, scope)` instead.
        """
        if not self.default_metric or not self.default_mode:
            raise ValueError(
                "To fetch the `best_config`, pass a `metric` and `mode` "
                "parameter to `tune.run()`. Alternatively, use the "
                "`get_best_config(metric, mode)` method to set the metric "
                "and mode explicitly.")
        return self.get_best_config(self.default_metric, self.default_mode)

    @property
    def best_checkpoint(self) -> Checkpoint:
        """Get the checkpoint of the best trial of the experiment

        The best trial is determined by comparing the last trial results
        using the `metric` and `mode` parameters passed to `tune.run()`.

        If you didn't pass these parameters, use
        `get_best_checkpoint(trial, metric, mode)` instead.
        """
        if not self.default_metric or not self.default_mode:
            raise ValueError(
                "To fetch the `best_checkpoint`, pass a `metric` and `mode` "
                "parameter to `tune.run()`. Alternatively, use the "
                "`get_best_checkpoint(trial, metric, mode)` method to set the "
                "metric and mode explicitly.")
        best_trial = self.best_trial
        return self.get_best_checkpoint(best_trial, self.default_metric,
                                        self.default_mode)

    @property
    def best_logdir(self) -> str:
        """Get the logdir of the best trial of the experiment

        The best trial is determined by comparing the last trial results
        using the `metric` and `mode` parameters passed to `tune.run()`.

        If you didn't pass these parameters, use
        `get_best_logdir(metric, mode)` instead.
        """
        if not self.default_metric or not self.default_mode:
            raise ValueError(
                "To fetch the `best_logdir`, pass a `metric` and `mode` "
                "parameter to `tune.run()`. Alternatively, use the "
                "`get_best_logdir(metric, mode, scope)` method to set the "
                "metric and mode explicitly.")
        return self.get_best_logdir(self.default_metric, self.default_mode)

    @property
    def best_dataframe(self) -> DataFrame:
        """Get the full result dataframe of the best trial of the experiment

        The best trial is determined by comparing the last trial results
        using the `metric` and `mode` parameters passed to `tune.run()`.

        If you didn't pass these parameters, use
        `get_best_logdir(metric, mode)` and use it to look for the dataframe
        in the `self.trial_dataframes` dict.
        """
        if not self.default_metric or not self.default_mode:
            raise ValueError(
                "To fetch the `best_result`, pass a `metric` and `mode` "
                "parameter to `tune.run()`.")
        best_logdir = self.best_logdir
        return self.trial_dataframes[best_logdir]

    @property
    def best_result(self) -> Dict:
        """Get the last result of the best trial of the experiment

        The best trial is determined by comparing the last trial results
        using the `metric` and `mode` parameters passed to `tune.run()`.

        If you didn't pass these parameters, use
        `get_best_trial(metric, mode, scope).last_result` instead.
        """
        if not self.default_metric or not self.default_mode:
            raise ValueError(
                "To fetch the `best_result`, pass a `metric` and `mode` "
                "parameter to `tune.run()`. Alternatively, use "
                "`get_best_trial(metric, mode).last_result` to set "
                "the metric and mode explicitly and fetch the last result.")
        return self.best_trial.last_result

    @property
    def best_result_df(self) -> DataFrame:
        """Get the best result of the experiment as a pandas dataframe.

        The best trial is determined by comparing the last trial results
        using the `metric` and `mode` parameters passed to `tune.run()`.

        If you didn't pass these parameters, use
        `get_best_trial(metric, mode, scope).last_result` instead.
        """
        if not pd:
            raise ValueError("`best_result_df` requires pandas. Install with "
                             "`pip install pandas`.")
        best_result = flatten_dict(self.best_result, delimiter=".")
        return pd.DataFrame.from_records([best_result], index="trial_id")

    @property
    def results(self) -> Dict[str, Dict]:
        """Get the last result of the all trials of the experiment"""
        return {trial.trial_id: trial.last_result for trial in self.trials}

    @property
    def results_df(self) -> DataFrame:
        if not pd:
            raise ValueError("`best_result_df` requires pandas. Install with "
                             "`pip install pandas`.")
        return pd.DataFrame.from_records(
            [
                flatten_dict(trial.last_result, delimiter=".")
                for trial in self.trials
            ],
            index="trial_id")

+    def get_best_trial(self, metric=None, mode=None, scope="last"):
        """Retrieve the best trial object.

        Compares all trials' scores on ``metric``.
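
The block above is the core of the PR: property shortcuts (`best_trial`, `best_config`, `best_checkpoint`, `best_logdir`, `best_dataframe`, `best_result`, `best_result_df`, `results`, `results_df`) that resolve against the `metric`/`mode` defaults from `tune.run()` and raise a descriptive `ValueError` when those defaults were not set. A sketch of both paths, again reusing the earlier `analysis` object:

```python
# Defaults were passed to tune.run(), so the shortcuts resolve directly:
print(analysis.best_trial)    # Trial object with the best last result
print(analysis.best_logdir)   # logdir of that trial
print(analysis.results_df)    # one row per trial, indexed by trial_id

# Without defaults the properties raise, and the explicit getters remain:
try:
    _ = analysis.best_config
except ValueError:
    best_config = analysis.get_best_config(metric="mean_loss", mode="min")
```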
@@ -380,7 +535,7 @@ def get_best_trial(self, metric=None, mode=None, scope="all"):
                "parameter?")
        return best_trial

-    def get_best_config(self, metric=None, mode=None, scope="all"):
+    def get_best_config(self, metric=None, mode=None, scope="last"):
        """Retrieve the best config corresponding to the trial.

        Compares all trials' scores on `metric`.

@@ -407,7 +562,7 @@ def get_best_config(self, metric=None, mode=None, scope="all"):
        best_trial = self.get_best_trial(metric, mode, scope)
        return best_trial.config if best_trial else None

-    def get_best_logdir(self, metric=None, mode=None, scope="all"):
+    def get_best_logdir(self, metric=None, mode=None, scope="last"):
        """Retrieve the logdir corresponding to the best trial.

        Compares all trials' scores on `metric`.

Contributor (review comment on the `get_best_config` default-scope change): nice - need to explicitly say this on the release notes
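
As the review comment above notes, the default `scope` of `get_best_trial`, `get_best_config`, and `get_best_logdir` changes from "all" to "last" in this PR, so by default trials are now compared on their last reported result rather than their best result over all iterations. Callers who want the old behaviour can ask for it explicitly; a short sketch:

```python
# New default: compare trials by the metric in their *last* result.
last_best = analysis.get_best_trial(metric="mean_loss", mode="min")

# Previous behaviour: compare each trial's best value over all iterations.
overall_best = analysis.get_best_trial(
    metric="mean_loss", mode="min", scope="all")
```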