tune.sample_from parallelized #126

Open
kapsl opened this issue Mar 17, 2020 · 1 comment

kapsl commented Mar 17, 2020

Hi,
Is it possible to run the different trials in parallel, either locally or in the cloud?

hartikainen (Member) commented

Hey @kapsl,

Yeah, the trials are automatically parallelized locally if you run softlearning run_example_local and set the resources correctly. By "correctly" I mean, for example: if your computer has 16 CPUs and you want to allocate 4 CPUs per trial, you can run 4 trials concurrently (= 16 CPUs / (4 CPUs per trial)) by setting --trial-cpus=4 in your softlearning command (e.g. softlearning run_example_local ... --trial-cpus=4). If you have GPUs available, you can set those similarly with --trial-gpu=... (GPUs support fractional resources).
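
To make the numbers concrete, something like this (the resource values are just examples, and the ... is whatever example arguments you already pass):

```shell
# On a 16-CPU machine, 4 CPUs per trial => up to 4 trials run concurrently.
softlearning run_example_local ... --trial-cpus=4

# GPUs can be requested per trial too, including fractional amounts,
# e.g. half a GPU per trial so trials can share a device.
softlearning run_example_local ... --trial-cpus=4 --trial-gpu=0.5
```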

For running things in the cloud, you can do something very similar with softlearning run_example_{ec2,gce} .... However, this requires a bit more manual setup: you need to configure the ray autoscaler for the cluster (e.g. the ray-autoscaler-gce.yaml) and create a VM image with all the dependencies installed for the cloud machines to use. If at some point you want to try this option, I'm happy to write clearer step-by-step instructions for it.
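
Roughly, the cloud flow looks something like this (just a sketch, not exact instructions; the flags are the same as in the local case and the ... is again your usual example arguments):

```shell
# 1) Configure the ray autoscaler for your cluster, e.g. fill in the project, zone,
#    machine types, and the VM image with the dependencies in ray-autoscaler-gce.yaml.
# 2) Launch the example against the cluster:
softlearning run_example_gce ... --trial-cpus=4   # GCE
softlearning run_example_ec2 ... --trial-cpus=4   # EC2
```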

An action item for myself would be to document these features a bit better.
