
train mode with resume; enjoy mode refactor #455

Merged
merged 19 commits into from
Apr 14, 2020
@kengz kengz commented Apr 14, 2020

train mode with resume

Fixes #444. This adds the capability to resume a training run in a past-future-consistent manner. See the explanation below.

Suppose we run a training with 10 million (10M) frames to completion, and see that further improvement might be possible if we had run it for longer, say 20M frames. If only we could go back in time and set the frames to 20M to begin with.

The resume mode allows us to do that without time traveling. We can edit the spec file in the present and resume training so the run picks up where it left off as if it had been using the edited spec all along. Of course, the modification to the spec file must itself be consistent with the past and the future, e.g. we cannot suddenly modify the initial learning rate or variable values.

To achieve this, the lab relies on three objects and their load methods:

  • algorithm.load(): this already loads the algorithm's model weights for enjoy mode; it is now also used for train@ mode
  • body.train_df: this DataFrame tracks the training metrics, so it also needs to be loaded
  • env.clock: this tracks the time within the session.

Since everything in the lab runs according to env.clock, the above are all we need to restore for resuming training. Once the network and training metrics are restored, and the clock is set correctly, everything runs from the designated point in time.
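The restore sequence above can be sketched as follows. This is a minimal illustrative mock-up, not the actual SLM-Lab code: the stub classes, the `set` method, and the way the last frame is read back from the metrics DataFrame are assumptions; only the three-step pattern (weights, metrics, clock) comes from the PR text.

```python
import pandas as pd

class Clock:
    """Stand-in for env.clock: tracks time within a session."""
    def __init__(self):
        self.frame = 0

    def set(self, frame):
        # restore the session time so training resumes at the right point
        self.frame = frame

class StubAlgorithm:
    """Stand-in for an algorithm with a load() method for model weights."""
    def __init__(self):
        self.loaded = False

    def load(self):
        # in the lab this would restore network weights from the run directory
        self.loaded = True

def resume(algorithm, clock, train_df):
    # 1. restore network weights (the same load() used by enjoy mode)
    algorithm.load()
    # 2. training metrics (train_df) are assumed already reloaded from disk
    # 3. set the clock from the last recorded frame; everything else
    #    in the lab runs according to the clock
    clock.set(int(train_df['frame'].iloc[-1]))

algo = StubAlgorithm()
clock = Clock()
train_df = pd.DataFrame({'frame': [1000, 2000, 3000],
                         'total_reward': [10.0, 30.0, 80.0]})
resume(algo, clock, train_df)
# clock.frame is now 3000, so the run continues from that point in time
```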

NOTE: for off-policy algorithms the replay memory is not restored, simply due to the cost of storing replay data (GBs of data per session and slow writes during frequent checkpoints). Hence the behavior of off-policy replay is slightly different on resume: the memory needs to fill up again from the resume point, and training only restarts once the replay size reaches the specified threshold, so a small fraction of the total timesteps is lost.
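A toy buffer makes the refill behavior concrete. The class and the `training_start_step` name here are illustrative assumptions, not SLM-Lab's actual replay implementation; the point shown is that an empty post-resume buffer delays training until the size threshold is met again.

```python
class ToyReplay:
    """Toy replay buffer: not checkpointed, so after resume it starts
    empty and training only restarts once it holds enough experiences."""
    def __init__(self, training_start_step):
        self.buffer = []
        self.training_start_step = training_start_step

    def add(self, experience):
        self.buffer.append(experience)

    def ready_to_train(self):
        return len(self.buffer) >= self.training_start_step

# state right after resume: an empty buffer with threshold 4
memory = ToyReplay(training_start_step=4)
steps_trained = []
for t in range(6):
    memory.add({'t': t})
    if memory.ready_to_train():
        steps_trained.append(t)
# training only resumes from step 3 onward (once the buffer holds 4 items),
# so steps 0-2 after the resume point are spent refilling the memory
```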

Usage example

Specify the train mode as train@{predir}, where {predir} is the data directory of the last training run, or simply use `latest` to pick the latest run. e.g.:

python run_lab.py slm_lab/spec/benchmark/reinforce/reinforce_cartpole.json reinforce_cartpole train
# terminate run before its completion
# optionally edit the spec file in a past-future-consistent manner

# run resume with either of the commands:
python run_lab.py slm_lab/spec/benchmark/reinforce/reinforce_cartpole.json reinforce_cartpole train@latest
# or to use a specific run folder
python run_lab.py slm_lab/spec/benchmark/reinforce/reinforce_cartpole.json reinforce_cartpole train@data/reinforce_cartpole_2020_04_13_232521

enjoy mode refactor

The train@ resume mode API allows the enjoy mode to be refactored; both now share a similar syntax. Continuing with the example above, to enjoy a trained model, we now use:

python run_lab.py slm_lab/spec/benchmark/reinforce/reinforce_cartpole.json reinforce_cartpole enjoy@data/reinforce_cartpole_2020_04_13_232521/reinforce_cartpole_t0_s0_spec.json

The refactored changes are summarized below:

  • API: `enjoy@{prename}` -> `enjoy@{session_spec_file}`
  • removed eval_model_prepath and ckpt injection from meta spec and related methods
  • removed the need for ckpt entirely and related methods
  • refactored spec methods accordingly
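The shared mode@target syntax can be parsed with a small helper. This parser is a hypothetical sketch of the convention described above ('train', 'train@latest', 'train@{predir}', 'enjoy@{session_spec_file}'), not the actual SLM-Lab spec-method implementation.

```python
def parse_lab_mode(mode_str):
    """Split a lab mode string into (mode, target).

    Examples of the convention this PR establishes:
      'train'                 -> ('train', None)
      'train@latest'          -> ('train', 'latest')
      'train@data/run_dir'    -> ('train', 'data/run_dir')
      'enjoy@data/spec.json'  -> ('enjoy', 'data/spec.json')
    """
    mode, _, target = mode_str.partition('@')
    return mode, target or None
```

With this shape, enjoy mode no longer needs an injected `ckpt` or `eval_model_prepath`: the target after `@` carries everything needed to locate the session spec or run directory.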

Misc

  • cleaned up logging and self_desc for better clarity
  • renamed read_spec_and_run -> get_spec_and_run
  • renamed post_init_nets -> end_init_nets
  • renamed in_eval_lab_modes -> in_eval_lab_mode
  • added counterpart in_train_lab_mode

@kengz kengz merged commit 7605a82 into master Apr 14, 2020
@kengz kengz deleted the resume branch April 14, 2020 15:28
@kengz kengz mentioned this pull request Apr 14, 2020