OfflineRL

OfflineRL is a repository of offline reinforcement learning (also known as batch reinforcement learning) algorithms.

Re-implemented Algorithms

Model-free methods

  • CRR: Wang, Ziyu, et al. "Critic Regularized Regression." Advances in Neural Information Processing Systems, vol. 33, 2020, pp. 7768–7778. paper
  • CQL: Kumar, Aviral, et al. "Conservative Q-Learning for Offline Reinforcement Learning." Advances in Neural Information Processing Systems, vol. 33, 2020. paper code
  • PLAS: Zhou, Wenxuan, et al. "PLAS: Latent Action Space for Offline Reinforcement Learning." arXiv preprint arXiv:2011.07213, 2020. website paper code
  • BCQ: Fujimoto, Scott, et al. "Off-Policy Deep Reinforcement Learning without Exploration." International Conference on Machine Learning, 2019, pp. 2052–2062. paper code
  • EDAC: An, Gaon, et al. "Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble." Advances in Neural Information Processing Systems, vol. 34, 2021, pp. 7436–7447. paper code
  • MCQ: Lyu, Jiafei, et al. "Mildly Conservative Q-Learning for Offline Reinforcement Learning." Advances in Neural Information Processing Systems, vol. 35, 2022, pp. 1711–1724. paper code
  • TD3BC: Fujimoto, Scott, and Shixiang Shane Gu. "A Minimalist Approach to Offline Reinforcement Learning." Advances in Neural Information Processing Systems, vol. 34, 2021, pp. 20132–20145. paper code
  • PRDC: Ran, Yuhang, et al. "Policy Regularization with Dataset Constraint for Offline Reinforcement Learning." International Conference on Machine Learning, 2023, pp. 28701–28717. paper code

Model-based methods

  • BREMEN: Matsushima, Tatsuya, et al. "Deployment-Efficient Reinforcement Learning via Model-Based Offline Optimization." International Conference on Learning Representations, 2021. paper code
  • COMBO: Yu, Tianhe, et al. "COMBO: Conservative Offline Model-Based Policy Optimization." arXiv preprint arXiv:2102.08363, 2021. paper
  • MOPO: Yu, Tianhe, et al. "MOPO: Model-Based Offline Policy Optimization." Advances in Neural Information Processing Systems, vol. 33, 2020. paper code
  • MAPLE: Chen, Xiong-Hui, et al. "MAPLE: Offline Model-Based Adaptable Policy Learning." Advances in Neural Information Processing Systems, vol. 34, 2021. paper code
  • MOBILE: Sun, Yihao, et al. "Model-Bellman Inconsistency for Model-Based Offline Reinforcement Learning." International Conference on Machine Learning, 2023, pp. 33177–33194. paper code
  • RAMBO: Rigter, Marc, et al. "RAMBO-RL: Robust Adversarial Model-Based Offline Reinforcement Learning." Advances in Neural Information Processing Systems, vol. 35, 2022, pp. 16082–16097. paper code

Install Datasets

NeoRL

git clone https://github.com/Polixir/neorl.git
cd neorl
pip install -e .

For more details on usage, please see neorl.
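
As a quick sanity check after installing, each task's offline dataset can also be loaded directly in Python. Below is a minimal sketch, assuming NeoRL's documented neorl.make / get_dataset interface; the exact dict keys ("obs", "action") are assumptions and may differ across versions.

import neorl

# Create the benchmark environment.
env = neorl.make("HalfCheetah-v3")

# data_type and train_num mirror the --task_data_type and
# --task_train_num flags used by the training scripts below.
train_data, val_data = env.get_dataset(data_type="low", train_num=100)

# The buffers are dicts of arrays; key names assumed here.
print(train_data["obs"].shape, train_data["action"].shape)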

D4RL (Optional)

pip install git+https://github.com/rail-berkeley/d4rl@master#egg=d4rl

For more details on usage, please see d4rl.
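
If D4RL is installed, its datasets load through the standard Gym interface. A minimal sketch using D4RL's documented get_dataset and qlearning_dataset helpers:

import gym
import d4rl  # importing registers the D4RL environments with gym

env = gym.make("halfcheetah-medium-v0")

# Raw dataset dict with keys such as "observations", "actions", "rewards".
dataset = env.get_dataset()

# Transition-level view (s, a, r, s') convenient for Q-learning methods.
transitions = d4rl.qlearning_dataset(env)
print(transitions["observations"].shape)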

Install offlinerl

pip install -e .
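
Training can also be driven from Python rather than the example scripts. The sketch below mirrors the pattern in examples/train_task.py; algo_select and load_data_from_neorl (and their signatures) are assumptions based on that script, so treat the script itself as the authoritative reference.

# Sketch only: entry points and signatures assumed from examples/train_task.py.
from offlinerl.algo import algo_select
from offlinerl.data import load_data_from_neorl

# Pick an algorithm plus its default config; keys mirror the CLI flags.
algo_init_fn, algo_trainer_cls, algo_config = algo_select(
    {"algo_name": "cql", "task": "HalfCheetah-v3"}
)

# Load the offline buffers selected by --task_data_type / --task_train_num.
train_buffer, val_buffer = load_data_from_neorl("HalfCheetah-v3", "low", 100)

# See examples/train_task.py for constructing the trainer and the
# evaluation callback, and for calling the trainer's train(...) method.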

Example

# Train on the HalfCheetah-v3-L-9 task using the default parameters of the CQL algorithm
python examples/train_task.py --algo_name=cql --exp_name=halfcheetah --task HalfCheetah-v3 --task_data_type low --task_train_num 100

# Train on the SafetyHalfCheetah task using the default parameters of the MCQ algorithm
python examples/train_task.py --algo_name=mcq --exp_name=SafetyHalfCheetah --task SafetyHalfCheetah

# Parameter search in the default parameter space with the CQL algorithm on the HalfCheetah-v3-L-9 task
python examples/train_tune.py --algo_name=cql --exp_name=halfcheetah --task HalfCheetah-v3 --task_data_type low --task_train_num 100

# Parameter search in the default parameter space with the MCQ algorithm on the SafetyHalfCheetah task
python examples/train_tune.py --algo_name=mcq --exp_name=SafetyHalfCheetah --task SafetyHalfCheetah

# Train on the D4RL halfcheetah-medium task using the default parameters of the CQL algorithm (requires D4RL)
python examples/train_d4rl.py --algo_name=cql --exp_name=d4rl-halfcheetah-medium-cql --task d4rl-halfcheetah-medium-v0

Parameters:

  • algo_name: Algorithm name. Algorithms such as bc, cql, plas, bcq, and mopo are available; see the algorithm lists above.
  • exp_name: Experiment name, used for easy visualization in aim.
  • task: Task name; see neorl for details.
  • task_data_type: Data quality level. In neorl, each task's data is collected with low-, medium-, and high-level policies.
  • task_train_num: Number of training trajectories. For each task, neorl provides up to 10,000 training trajectories.

View experimental results

We use Aim to store and visualize results. Aim is an experiment logger that makes it easy to manage thousands of experiments. For more details, see aim.

To visualize results in this repository:

cd offlinerl_tmp
aim up

Then you can view the results at http://127.0.0.1:43800.

Model-based Running Example

# Tune and save the transition models
python examples/model_tune.py --algo_name bc_model --exp_name neorl-RandomFrictionHopper-model --task RandomFrictionHopper

# Train MOPO, loading the best transition model
python examples/train_task.py --algo_name mopo --exp_name neorl-safecheetah-mopo-new --task SafetyHalfCheetah --dynamics_path best_run_id

# Train COMBO, loading the best transition model
python examples/train_task.py --algo_name combo --exp_name neorl-safecheetah-combo-new --task SafetyHalfCheetah --dynamics_path best_run_id

# Train RAMBO, loading the best transition model
python examples/train_task.py --algo_name rambo --exp_name neorl-safecheetah-rambo-new --task SafetyHalfCheetah --dynamics_path best_run_id

# Train MOBILE, loading the best transition model
python examples/train_task.py --algo_name mobile --exp_name neorl-safecheetah-mobile-new --task SafetyHalfCheetah --dynamics_path best_run_id
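
In the commands above, best_run_id is a placeholder: replace it with the id (or path) of the transition model saved in the tuning step, as recorded by Aim.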
