1. added `test.yaml` for quickly verifying RLs
2. changed the folder name from `algos` to `algorithms` for better readability
3. removed the single-agent recorder; all algorithms (SARL & MARL) now use `SimpleMovingAverageRecoder` (see the sketch after this list)
4. removed `GymVectorizedType` in `common/specs.py`
5. removed `common/train/*` and implemented a unified training interface in `rls/train`
6. restructured the `make_env` function in `rls/envs/make_env`
7. optimized the `load_config` function
8. moved `off_policy_buffer.yaml` to `rls/configs/buffer`
9. removed configuration options such as `eval_while_train`, `add_noise2buffer`, etc.
10. optimized environments' configuration files
11. optimized environment wrappers and implemented a unified env interface for `gym` and `unity`; see `env_base.py` and the sketch after this list
12. updated dockerfiles
13. updated README
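
A rough sketch of what a simple-moving-average recorder does (item 3); the class and method names below are illustrative assumptions, not the actual `SimpleMovingAverageRecoder` implementation:

```python
from collections import deque

import numpy as np


class MovingAverageRecorder:
    """Minimal sketch: track a sliding window of episode returns and report their mean."""

    def __init__(self, window: int = 100):
        # deque with maxlen drops the oldest return once the window is full
        self.returns = deque(maxlen=window)

    def update(self, episode_return: float) -> None:
        self.returns.append(episode_return)

    @property
    def mean_return(self) -> float:
        return float(np.mean(self.returns)) if self.returns else 0.0
```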
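
And a minimal sketch of what a unified env interface for `gym` and `unity` (item 11) could look like; `EnvBase` and its method names here are assumptions for illustration and are not copied from `env_base.py`:

```python
from abc import ABC, abstractmethod


class EnvBase(ABC):
    """Illustrative unified interface that both gym and unity wrappers could implement."""

    @abstractmethod
    def reset(self):
        """Reset all environment copies and return initial observations."""

    @abstractmethod
    def step(self, actions):
        """Apply actions and return (observations, rewards, dones, infos)."""

    @abstractmethod
    def render(self):
        """Render the environment, if supported."""

    @abstractmethod
    def close(self):
        """Release simulator resources."""
```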
1. fixed RNN hidden state iteration
2. renamed `n_time_step` to `chunk_length`
3. added `train_interval` to both SARL and MARL off-policy algorithms to control how often training happens relative to data collection (see the sketch after this list)
4. added `n_step_value` for calculating n-step returns (see the sketch after this list)
5. updated README
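
A sketch of how a `train_interval` option typically gates off-policy updates against data collection; the loop and the agent/buffer APIs below are illustrative assumptions, not the RLs implementation:

```python
def collect_and_train(env, agent, buffer, total_steps: int, train_interval: int = 4):
    """Illustrative loop: one learning step every `train_interval` environment steps."""
    obs = env.reset()
    for step in range(1, total_steps + 1):
        action = agent.select_action(obs)
        next_obs, reward, done, _ = env.step(action)
        buffer.add(obs, action, reward, next_obs, done)
        obs = env.reset() if done else next_obs

        # Collect `train_interval` transitions per gradient update.
        if step % train_interval == 0 and buffer.is_ready():
            agent.learn(buffer.sample())
```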
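
And the standard n-step return that an `n_step_value` setting would control: `G_t = r_t + γ·r_{t+1} + ... + γ^(n-1)·r_{t+n-1} + γ^n·V(s_{t+n})`, with bootstrapping cut off at terminal states. A minimal, assumption-based computation for one trajectory segment:

```python
def n_step_return(rewards, dones, next_value, gamma=0.99):
    """Illustrative n-step return: discounted rewards plus a bootstrapped tail value.

    rewards    -- [r_t, ..., r_{t+n-1}] for one segment
    dones      -- done flags aligned with rewards; a done stops bootstrapping
    next_value -- V(s_{t+n}) used to bootstrap the tail
    """
    g = next_value
    for reward, done in zip(reversed(rewards), reversed(dones)):
        g = reward + gamma * (1.0 - done) * g
    return g


# 3-step example with no terminations:
# 1.0 + 0.9*0.0 + 0.9**2 * 1.0 + 0.9**3 * 0.5 = 2.1745
print(n_step_return([1.0, 0.0, 1.0], dones=[0, 0, 0], next_value=0.5, gamma=0.9))
```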