From 43a06e6c88036b93cbf644e621eed2e9e5579c79 Mon Sep 17 00:00:00 2001
From: Xavier Bouthillier
Date: Tue, 25 Feb 2020 10:26:59 -0500
Subject: [PATCH] Adapt doc to support for entry points

PR #333 adds support for entry points, removing the requirement to make
scripts executable. This commit adapts the doc to this new feature.
---
 docs/src/examples/cluster.rst         |  8 +++----
 docs/src/examples/pytorch_a2c_ppo.rst | 12 +----------
 docs/src/examples/pytorch_cifar.rst   |  6 ------
 docs/src/user/cli/info.rst            |  2 +-
 docs/src/user/evc.rst                 |  8 +++----
 docs/src/user/pytorch.rst             | 31 ++++++++++-----------------
 6 files changed, 21 insertions(+), 46 deletions(-)

diff --git a/docs/src/examples/cluster.rst b/docs/src/examples/cluster.rst
index d6a658d38..c27e91f16 100644
--- a/docs/src/examples/cluster.rst
+++ b/docs/src/examples/cluster.rst
@@ -19,7 +19,7 @@ together. A minimal Slurm script to launch 10 workers would thus only require th
 
     #SBATCH --array=1-10
 
-    orion hunt -n parallel-exp ./main.py --lr~'loguniform(1e-5, 1.0)'
+    orion hunt -n parallel-exp python main.py --lr~'loguniform(1e-5, 1.0)'
 
 All workers are optimizing the experiment ``parallel-exp`` in parallel, each holding a copy of the
 optimization algorithm. Adding Slurm options to execute the mnist example with proper ressources
@@ -36,7 +36,7 @@ gives the following
     #SBATCH --mem=10GB
     #SBATCH --time=2:59:00
 
-    orion hunt -n parallel-exp --worker-trials 1 ./main.py --lr~'loguniform(1e-5, 1.0)'
+    orion hunt -n parallel-exp --worker-trials 1 python main.py --lr~'loguniform(1e-5, 1.0)'
 
 For now, Oríon does not provide detection of lost trials if a worker gets killed due to a timeout.
 Such trial would be indefinitely marked as ``pending`` in the DB and thus could not be
@@ -57,7 +57,7 @@ character ``%`` (ex: ``#SBATCH --array=1-100%10``).
     #SBATCH --mem=10GB
     #SBATCH --time=2:59:00
 
-    orion hunt -n parallel-exp --worker-trials 1 ./main.py --lr~'loguniform(1e-5, 1.0)'
+    orion hunt -n parallel-exp --worker-trials 1 python main.py --lr~'loguniform(1e-5, 1.0)'
 
 
 SSH tunnels
@@ -135,7 +135,7 @@ These lines can then be added to the script to submit workers in parallel.
 
     ssh -o StrictHostKeyChecking=no -L $ORION_DB_PORT::27017 -n -N -f
 
-    orion hunt -n parallel-exp --worker-trials 1 ./main.py --lr~'loguniform(1e-5, 1.0)'
+    orion hunt -n parallel-exp --worker-trials 1 python main.py --lr~'loguniform(1e-5, 1.0)'
 
 
 Notes for MongoDB
diff --git a/docs/src/examples/pytorch_a2c_ppo.rst b/docs/src/examples/pytorch_a2c_ppo.rst
index 35e48330e..d17985f5f 100644
--- a/docs/src/examples/pytorch_a2c_ppo.rst
+++ b/docs/src/examples/pytorch_a2c_ppo.rst
@@ -44,18 +44,8 @@ we add:
 
 .. code-block:: python
 
-    #!/usr/bin/env python
     from orion.client import report_objective
-
-and then we run
-
-.. code-block:: bash
-
-    chmod +x main.py
-
-To make it executable.
-
 Then, we ensure that we evaluate on a separate set of hold out random seeds for
 the environment (which should be different than the test set and training
 seed). For MuJoCo environments where the random seed has an effect, we can simply set
@@ -96,7 +86,7 @@ How to search for hyperparameters
 .. code-block:: bash
 
     orion -v hunt -n ppo_hopper \
-        ./main.py --env-name "Hopper-v2" --algo ppo --use-gae --vis-interval 1 \
+        python main.py --env-name "Hopper-v2" --algo ppo --use-gae --vis-interval 1 \
         --log-interval 1 --num-stack 1 --num-steps 2048 --num-processes 1 \
         --lr~'loguniform(1e-5, 1.0)' --entropy-coef 0 --value-loss-coef 1 \
         --ppo-epoch 10 --num-mini-batch 32 --gamma~'uniform(.95, .9995)' --tau 0.95 \
diff --git a/docs/src/examples/pytorch_cifar.rst b/docs/src/examples/pytorch_cifar.rst
index 1808da99c..84991373c 100644
--- a/docs/src/examples/pytorch_cifar.rst
+++ b/docs/src/examples/pytorch_cifar.rst
@@ -22,12 +22,6 @@ Set up
     git clone https://github.com/kuangliu/pytorch-cifar.git
     cd pytorch-cifar
 
-Add python shebang (using ``sed`` here)
-
-.. code-block:: bash
-
-    sed -i '1 i\#!/usr/bin/env python' main.py
-
 Add to last line of test()
 
 .. code-block:: python
diff --git a/docs/src/user/cli/info.rst b/docs/src/user/cli/info.rst
index ae8685937..7ba08a107 100644
--- a/docs/src/user/cli/info.rst
+++ b/docs/src/user/cli/info.rst
@@ -18,7 +18,7 @@ Here is an example of all the sections provided by the command.
 
 Commandline
 ===========
 
---lr~loguniform(1e-5, 1.0)
+python main.py --lr~loguniform(1e-5, 1.0)
 
 Config
diff --git a/docs/src/user/evc.rst b/docs/src/user/evc.rst
index 18d3b7a85..54f3e44d5 100644
--- a/docs/src/user/evc.rst
+++ b/docs/src/user/evc.rst
@@ -24,7 +24,7 @@ omitted from the command samples.
 
 .. code-block:: bash
 
-    $ orion hunt -n orion-tutorial ./main.py --lr~'loguniform(1e-5, 1.0)' --momentum~'uniform(0, 1)'
+    $ orion hunt -n orion-tutorial python main.py --lr~'loguniform(1e-5, 1.0)' --momentum~'uniform(0, 1)'
 
 This cannot be the same as the experiment ``orion-tutorial`` since the space of optimization is
 now different. Such a call will trigger an experiment branching, meaning that a new experiment will
@@ -82,7 +82,7 @@ change our commandline like this.
 
 .. code-block:: bash
 
-    $ orion hunt -n orion-tutorial ./main.py --lr~'loguniform(1e-5, 1.0)' --momentum~+'uniform(0, 1)'
+    $ orion hunt -n orion-tutorial python main.py --lr~'loguniform(1e-5, 1.0)' --momentum~+'uniform(0, 1)'
 
 Let's look back at the prompt above. Following the resolution of ``momentum`` conflict we see
 that it is now marked as resolved in the `Resolutions` list, while the experiment name is still
@@ -119,14 +119,14 @@ the prompt, and the resolution will be marked accordingly.
 
 .. code-block:: bash
 
-    $ orion hunt -n orion-tutorial -b orion-tutorial-with-momentum ./main.py --lr~'loguniform(1e-5, 1.0)' --momentum~+'uniform(0, 1)'
+    $ orion hunt -n orion-tutorial -b orion-tutorial-with-momentum python main.py --lr~'loguniform(1e-5, 1.0)' --momentum~+'uniform(0, 1)'
 
 You can execute again this branched experiment by reusing the same commandline but replacing the
 new experiment name ``orion-tutorial-with-momentum``.
 
 .. code-block:: bash
 
-    $ orion hunt -n orion-tutorial-with-momentum ./main.py --lr~'loguniform(1e-5, 1.0)' --momentum~'uniform(0, 1)'
+    $ orion hunt -n orion-tutorial-with-momentum python main.py --lr~'loguniform(1e-5, 1.0)' --momentum~'uniform(0, 1)'
 
 Or as always by only specifying the experiment name.
diff --git a/docs/src/user/pytorch.rst b/docs/src/user/pytorch.rst
index 8878ec1ce..28997eed7 100644
--- a/docs/src/user/pytorch.rst
+++ b/docs/src/user/pytorch.rst
@@ -25,12 +25,11 @@ PyTorch `examples repository`_:
 
 Adapting the code for Oríon
 ===========================
 
-To use Oríon with any code we need to do three things
+To use Oríon with any code we need to do two things
 
-1. make the ``main.py`` file a python executable
-2. import the ``orion.client.report_objective`` helper function
-3. call `report_objective` on the final objective output to be minimized (e.g. final test error
-   rate)
+1. import the ``orion.client.report_objective`` helper function
+2. call `report_objective` on the final objective output to be minimized
+   (e.g. final test error rate)
 
 After cloning pytorch examples repository, cd to mnist folder:
@@ -38,22 +37,14 @@ After cloning pytorch examples repository, cd to mnist folder:
 
     $ cd examples/mnist
 
-1. In your favourite editor add a shebang line ``#!/usr/bin/env python`` to
-the ``main.py`` and make it executable, for example:
-
-.. code-block:: bash
-
-    $ sed -i '1s/^/#!/usr/bin/env python/' main.py
-    $ chmod +x main.py
-
-2. At the top of the file, below the imports, add one line of import the helper function
+1. At the top of the file, below the imports, add one line to import the helper function
 ``orion.client.report_objective()``:
 
 .. code-block:: python
 
     from orion.client import report_objective
 
-3. We need the test error rate so we're going to add a line to the function ``test()`` to return it
+2. We need the test error rate so we're going to add a line to the function ``test()`` to return it
 
 .. code-block:: python
@@ -78,7 +69,7 @@ rather simple. Normally you would call the script the following way.
 
 .. code-block:: bash
 
-    $ ./main.py --lr 0.01
+    $ python main.py --lr 0.01
 
 To use it with Oríon, you simply need to prepend the call with
 ``orion hunt -n`` and specify the hyper-parameter prior
@@ -86,9 +77,9 @@ distributions.
 
 .. code-block:: bash
 
-    $ orion hunt -n orion-tutorial ./main.py --lr~'loguniform(1e-5, 1.0)'
+    $ orion hunt -n orion-tutorial python main.py --lr~'loguniform(1e-5, 1.0)'
 
-This commandline call will sequentially execute ``./main.py --lr=`` with random
+This commandline call will sequentially execute ``python main.py --lr=`` with random
 values sampled from the distribution ``loguniform(1e-5, 1.0)``. We support all
 distributions from scipy.stats_, plus ``choices()`` for categorical
 hyper-parameters (similar to numpy's `choice function`_).
@@ -114,7 +105,7 @@ You can also register experiments without executing them.
 
 .. code-block:: bash
 
-    $ orion init_only -n orion-tutorial ./main.py --lr~'loguniform(1e-5, 1.0)'
+    $ orion init_only -n orion-tutorial python main.py --lr~'loguniform(1e-5, 1.0)'
 
 
 Results
@@ -171,7 +162,7 @@ don't use ``--debug`` you will likely quickly fill your database with broken exp
 
 .. code-block:: bash
 
-    $ orion --debug hunt -n orion-tutorial ./main.py --lr~'loguniform(1e-5, 1.0)'
+    $ orion --debug hunt -n orion-tutorial python main.py --lr~'loguniform(1e-5, 1.0)'
 
 Hunting Options
 ---------------
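Note: the workflow this patch documents can be sanity-checked with a minimal stand-in for ``main.py``. This is only a sketch, not code from the docs: the ``--lr`` argument and the toy error-rate objective are illustrative, and a stub fallback for ``report_objective`` is included so the file runs even where Oríon is not installed. Only ``orion.client.report_objective`` itself comes from the documentation above.

```python
import argparse

try:
    # Real helper documented in the patched docs.
    from orion.client import report_objective
except ImportError:
    # Stub fallback so this sketch runs without Oríon installed.
    def report_objective(objective, name="objective"):
        print(f"{name}: {objective}")


def evaluate(lr):
    # Hypothetical stand-in for training + computing the final test error rate.
    return abs(lr - 0.01)


def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--lr", type=float, default=0.01)
    # parse_known_args tolerates extra arguments a launcher may pass.
    args, _ = parser.parse_known_args(argv)
    error_rate = evaluate(args.lr)
    # Report the value Oríon minimizes. With PR #333 the script needs no
    # shebang or chmod, since it is launched as `python main.py`.
    report_objective(error_rate)
    return error_rate


if __name__ == "__main__":
    main()
```

Launched as ``orion hunt -n orion-tutorial python main.py --lr~'loguniform(1e-5, 1.0)'``, Oríon substitutes a sampled learning rate for ``--lr`` on each trial; no ``chmod +x`` or shebang line is required.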