OSRL (Offline Safe Reinforcement Learning) offers a collection of elegant and extensible implementations of state-of-the-art offline safe reinforcement learning (RL) algorithms. Aimed at propelling research in offline safe RL, OSRL serves as a solid foundation to implement, benchmark, and iterate on safe RL solutions. This repository is heavily inspired by the CORL library for offline RL; check it out too!
The OSRL package is a crucial component of our larger benchmarking suite for offline safe learning, which also includes DSRL and FSRL, and is built to facilitate the development of robust and reliable offline safe RL solutions.
To learn more, please visit our project website.
The structure of this repo is as follows:
```
├── examples
│   ├── configs     # the training configs of each algorithm
│   ├── eval        # the evaluation scripts
│   └── train       # the training scripts
├── osrl
│   ├── algorithms  # offline safe RL algorithms
│   └── common      # base networks and utils
```
The implemented offline safe RL and imitation learning algorithms include:
Algorithm | Type | Description |
---|---|---|
BCQ-Lag | Q-learning | BCQ with PID Lagrangian |
BEAR-Lag | Q-learning | BEAR with PID Lagrangian |
CPQ | Q-learning | Constraints Penalized Q-learning (CPQ) |
COptiDICE | Distribution Correction Estimation | Offline Constrained Policy Optimization via stationary DIstribution Correction Estimation |
CDT | Sequential Modeling | Constrained Decision Transformer |
BC-All | Imitation Learning | Behavior Cloning with all datasets |
BC-Safe | Imitation Learning | Behavior Cloning with safe trajectories |
BC-Frontier | Imitation Learning | Behavior Cloning with high-reward trajectories |
OSRL is currently hosted on PyPI; you can install it with:

```bash
pip install osrl-lib
```
You can also pull the repo and install it locally:

```bash
git clone https://github.com/liuzuxin/OSRL.git
cd OSRL
pip install -e .
```
If you want to use the CDT algorithm, please also manually install the `OApackage`:

```bash
pip install OApackage==2.7.6
```
Example usage can be found in the `examples` folder, which contains the training and evaluation scripts for all the algorithms. All the parameters and their default configs for each algorithm are available in the `examples/configs` folder.
OSRL uses the `WandbLogger` from FSRL and the Pyrallis configuration system. The offline datasets and environments are provided by DSRL, so make sure you install both FSRL and DSRL first.
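As a quick orientation, the snippet below is a minimal sketch of loading a DSRL environment and its offline dataset. It assumes the Gymnasium/D4RL-style API that DSRL follows; the exact environment names and dataset keys may differ, so consult the DSRL documentation.

```python
import gymnasium as gym
import dsrl  # noqa: F401  # registers the Offline* environments

# Create a DSRL task and fetch its offline dataset (D4RL-style API, assumed).
env = gym.make("OfflineCarCircle-v0")
dataset = env.get_dataset()

# Safe-RL datasets typically carry observations, actions, rewards, costs,
# terminals, and timeouts; print the array shapes to inspect them.
print({k: v.shape for k, v in dataset.items() if hasattr(v, "shape")})
```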
For example, to train the `bcql` method, simply run the training script and override the default parameters:

```bash
python examples/train/train_bcql.py --task OfflineCarCircle-v0 --param1 args1 ...
```
By default, the config file and the logs during training will be written to the `logs/` folder, and the training plots can be viewed online using Wandb.
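To illustrate how these command-line overrides work, here is a minimal sketch of a Pyrallis-based entry point. The config fields below are hypothetical placeholders, not OSRL's actual parameters (see `examples/configs` for those).

```python
from dataclasses import dataclass

import pyrallis


@dataclass
class TrainConfig:
    # Hypothetical fields for illustration only.
    task: str = "OfflineCarCircle-v0"  # DSRL task name
    seed: int = 0
    cost_limit: float = 10.0  # assumed safety budget parameter


@pyrallis.wrap()
def main(cfg: TrainConfig):
    # Every dataclass field becomes an overridable command-line flag,
    # e.g. `python train.py --task OfflineCarCircle-v0 --seed 1`.
    print(cfg)


if __name__ == "__main__":
    main()
```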
You can also launch a batch of experiments, sequentially or in parallel, via the EasyRunner package; see `examples/train_all_tasks.py` for details.
To evaluate a trained agent, for example, a BCQ agent, simply run:

```bash
python examples/eval/eval_bcql.py --path path_to_model --eval_episodes 20
```
It will load the config file from `path_to_model/config.yaml` and the model file from `path_to_model/checkpoints/model.pt`, run 20 episodes, and print the average normalized reward and cost. The pretrained checkpoints for all datasets are available here for reference.
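Conceptually, the evaluation scripts follow the flow sketched below: load the saved config and checkpoint, roll out episodes, and report the average reward and cost (the real scripts additionally normalize these using DSRL's reference values). The `build_policy` stub here is a hypothetical placeholder that samples random actions; the actual scripts in `examples/eval` reconstruct the trained actor for each algorithm.

```python
import dsrl  # noqa: F401  # registers the Offline* environments
import gymnasium as gym
import numpy as np
import torch
import yaml


def build_policy(cfg, state_dict, env):
    # Placeholder: the real eval scripts build the algorithm's actor from
    # `cfg` and load `state_dict` into it. Here we just act randomly.
    return lambda obs: env.action_space.sample()


def evaluate(path: str, eval_episodes: int = 20):
    with open(f"{path}/config.yaml") as f:
        cfg = yaml.safe_load(f)
    state_dict = torch.load(f"{path}/checkpoints/model.pt", map_location="cpu")

    env = gym.make(cfg["task"])
    policy = build_policy(cfg, state_dict, env)

    rewards, costs = [], []
    for _ in range(eval_episodes):
        obs, _ = env.reset()
        done, ep_reward, ep_cost = False, 0.0, 0.0
        while not done:
            obs, reward, terminated, truncated, info = env.step(policy(obs))
            ep_reward += reward
            ep_cost += info.get("cost", 0.0)
            done = terminated or truncated
        rewards.append(ep_reward)
        costs.append(ep_cost)
    print(f"avg reward: {np.mean(rewards):.2f}, avg cost: {np.mean(costs):.2f}")
```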
The framework design and most baseline implementations of OSRL are heavily inspired by the CORL project, which is a great library for offline RL, and the cleanrl project, which targets online RL. So do check them out if you are interested!
If you have any suggestions or find any bugs, please feel free to submit an issue or a pull request. We welcome contributions from the community!