1. removed the single-agent off-policy algorithm `pd_ddpg`, since it is not mainstream
2. updated README
3. removed `iql` and added the script `IndependentMA.py` instead, which implements independent multi-agent algorithms
4. optimized summary writing
5. moved `NamedDict` from `rls.common.config` to `rls.common.specs`
6. updated example config
7. updated `.gitignore`
8. added the property `is_multi` to indicate whether a training task is single-agent (SARL) or multi-agent (MARL), for both Unity and Gym environments
9. restructured the inheritance relationships between algorithms and their superclasses
10. replaced `1.e+18` in YAML files with a large integer literal, since a large integer is wanted rather than a float
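Item 10 touches on a subtle YAML behavior: a literal like `1.e+18` is typically resolved to a float, and IEEE-754 doubles cannot distinguish nearby integers at that magnitude. A minimal pure-Python sketch of why an integer literal is safer (no YAML dependency; for illustration only):

```python
# A float near 1e18 has a spacing of 128 between representable values
# (53-bit mantissa), so adding 1 is silently lost:
big_float = 1.e+18
assert big_float + 1 == big_float

# Python integers are arbitrary precision, so the same value as an
# integer literal behaves exactly:
big_int = 1000000000000000000
assert big_int + 1 != big_int
```

This matters when the value is later used where an exact integer is expected, e.g. as a step count or buffer size.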
…rn`. (#28, #45)
1. implemented the function `n_step_return` to calculate $G_{t}^{n}$
2. implemented the function `td_lambda_return` to calculate the $TD(\lambda)$ return
3. renamed `no_save` to `is_save` and updated the related command-line option
4. removed the `--prefill-steps`, `--info`, and `--save-frequency` command-line options; users can specify those parameters in configuration files instead
5. updated README
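The two return estimators from items 1 and 2 can be sketched as below. This is a minimal NumPy sketch, not the repository's actual implementation; the array layout, where `values[t]` holds the bootstrap value $V(s_{t+1})$, is an assumption.

```python
import numpy as np

def n_step_return(rewards, values, gamma, n):
    """G_t^n = sum_{k=0}^{n-1} gamma^k * r_{t+k} + gamma^n * V(s_{t+n}).

    rewards[t] = r_t; values[t] = V(s_{t+1}).
    The horizon is truncated near the end of the trajectory.
    """
    T = len(rewards)
    out = np.empty(T)
    for t in range(T):
        h = min(n, T - t)  # effective horizon at step t
        g = sum(gamma ** k * rewards[t + k] for k in range(h))
        out[t] = g + gamma ** h * values[t + h - 1]
    return out

def td_lambda_return(rewards, values, gamma, lam):
    """TD(lambda) return via the backward recursion
    G_t = r_t + gamma * ((1 - lam) * V(s_{t+1}) + lam * G_{t+1}),
    bootstrapping the tail from values[-1] = V(s_T).
    """
    T = len(rewards)
    out = np.empty(T)
    g = values[-1]
    for t in reversed(range(T)):
        g = rewards[t] + gamma * ((1.0 - lam) * values[t] + lam * g)
        out[t] = g
    return out
```

With `lam=0` the TD($\lambda$) return collapses to the one-step return (`n_step_return` with `n=1`); with `lam=1` it becomes the bootstrapped Monte Carlo return.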