[tune] refactor tune search space #10444
Conversation
```python
def sample(self,
           domain: "Float",
           spec: Optional[Union[List[Dict], Dict]] = None,
           size: int = 1):
```
iirc `size` isn't actually used normally right?
Only in tests currently, where it's quite handy to check for distribution properties. However I could just sample several times in the tests.
It feels kind of natural to have that parameter for a sample method, but I agree that we currently do not expose it to users and could thus remove it for now.
IMO we don't need `size`, but if we decide to keep it, we should make it a shape of the return tensor (e.g., a tuple of ints) for consistency with `torch.distributions`, `np.random`, `tensorflow_probability`, etc.
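For reference, a minimal sketch of the `size`-as-shape convention being suggested here, as `np.random` already implements it (`torch.distributions` and `tensorflow_probability` follow the same pattern):

```python
import numpy as np

# In np.random, `size` accepts a shape tuple, not just a count:
samples = np.random.uniform(low=0.0, high=1.0, size=(4, 2))
print(samples.shape)  # (4, 2)
```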
richardliaw left a comment:
Looks good to me. Let's add top-level documentation in a separate PR?
One last comment: we should be much more aggressive about validation when specifying a config. Specifically:

```python
config = {"a": 1, "b": tune.uniform(0, 1)}
# This should raise a hard error
tune.run(func, config=config, search_alg=HyperOptSearch({"c": hp.uniform("c", 1, 2)}))

config = {"a": 1}
# This should work fine
tune.run(func, config=config, search_alg=HyperOptSearch({"c": hp.uniform("c", 1, 2)}))

config = {"a": 1}
# This should raise a hard error
tune.run(func, config=config, search_alg=HyperOptSearch(
    {"c": hp.uniform("c", 1, 2), "b": tune.uniform(0, 1)}))
```

Right now we just raise a warning, but IMO that is most certainly incorrect, and you get a weird error message.
I added a check for unresolved values in the …
Old discussion: #10401
Why are these changes needed?
This introduces a new search space representation that makes it possible to convert a Tune search space to other search algorithm definitions.
This also introduces new sampling methods, like quantized variants of `uniform` and `loguniform`, called `quniform` and `qloguniform`, respectively.

With these abstractions we get a natural way to distinguish between allowed parameter values (called `Domain`s) and the sampling methods (e.g. uniform, loguniform, normal). Theoretically, users can introduce their own domains and custom samplers (like sampling from a Beta distribution). The underlying API is quite flexible, e.g. `Float(1e-4, 1e-2).loguniform().quantized(5e-3)`. This API is currently hidden behind the Tune sampler functions, like `tune.qloguniform(1e-4, 1e-2, 5e-3)`.

Converting Tune search space definitions to search spaces for external search algorithms, like AxSearch, HyperOpt, or BayesOpt, is straightforward. If a search algorithm doesn't support specific sampling methods, they can be dropped with a warning, or an error can be raised. For instance, BayesOpt doesn't support custom sampling methods and is only interested in parameter bounds. If someone passes `Float(1e-4, 1e-2).qloguniform(5e-3)` to BayesOpt, it will be converted to the parameter bounds `(1e-4, 1e-2)`, and a warning will be raised stating that the custom sampler has been dropped.

Generally, this refactoring introduces flexibility in defining and converting search spaces while keeping full backwards compatibility.
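A rough sketch of that conversion behavior (the `train` function is hypothetical, and the `BayesOptSearch(metric=..., mode=...)` arguments are assumed here):

```python
from ray import tune
from ray.tune.suggest.bayesopt import BayesOptSearch

def train(config):  # hypothetical trainable
    tune.report(score=config["lr"])

# The quantized log-uniform sampler cannot be represented in BayesOpt,
# so only the bounds (1e-4, 1e-2) are passed on, with a warning.
config = {"lr": tune.qloguniform(1e-4, 1e-2, 5e-3)}
tune.run(train, config=config,
         search_alg=BayesOptSearch(metric="score", mode="max"))
```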
Example usage:
External API:
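A minimal sketch using the sampler functions described above (the `train` function and parameter names are illustrative):

```python
from ray import tune

def train(config):  # illustrative trainable
    tune.report(score=config["lr"] * config["momentum"])

config = {
    "lr": tune.qloguniform(1e-4, 1e-2, 5e-3),  # quantized log-uniform
    "momentum": tune.uniform(0.1, 0.9),        # uniform float
    "batch_size": tune.choice([16, 32, 64]),   # categorical
}

tune.run(train, config=config)
```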
Lower-level API equivalent:
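A sketch of the same search space written against the lower-level domain API (assuming the domain classes live in `ray.tune.sample`, one of the files touched by this PR):

```python
from ray.tune.sample import Categorical, Float

config = {
    # Equivalent to tune.qloguniform(1e-4, 1e-2, 5e-3):
    "lr": Float(1e-4, 1e-2).loguniform().quantized(5e-3),
    # Equivalent to tune.uniform(0.1, 0.9):
    "momentum": Float(0.1, 0.9).uniform(),
    # Equivalent to tune.choice([16, 32, 64]):
    "batch_size": Categorical([16, 32, 64]).uniform(),
}
```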
Related issue number
Concerns #9969
Checks
I've run `scripts/format.sh` to lint the changes in this PR.