RL-Lib is an RL library based on PyTorch. It is meant to support my research, but other researchers are welcome to use it.
The two main components are environments and agents, which interact with each other. The core logic of each agent is implemented in the algorithms folder. For example, a SAC agent implements the training loop in the Agent, while the training losses are implemented in the SAC algorithm.
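The interaction described above can be sketched as a generic loop. This is a minimal, self-contained illustration: the class and method names (`RandomEnvironment`, `EchoAgent`, `rollout`, `act`, `observe`) are assumptions for the sketch, not RL-Lib's exact API — consult the agents and algorithms folders for the real interfaces.

```python
class RandomEnvironment:
    """Toy stand-in for an environment with a fixed horizon."""

    def __init__(self, horizon=10):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return 0.0  # initial observation

    def step(self, action):
        self.t += 1
        observation, reward = float(self.t), 1.0
        done = self.t >= self.horizon
        return observation, reward, done


class EchoAgent:
    """Toy stand-in for an agent; a real agent would delegate its
    training losses to an algorithm (e.g. SAC) when observing."""

    def __init__(self):
        self.total_reward = 0.0

    def act(self, observation):
        return 0.0  # a real agent samples from its policy here

    def observe(self, observation, action, reward, next_observation, done):
        self.total_reward += reward  # a real agent computes losses here


def rollout(environment, agent):
    """Generic interaction loop shared by all agents."""
    observation = environment.reset()
    done = False
    while not done:
        action = agent.act(observation)
        next_observation, reward, done = environment.step(action)
        agent.observe(observation, action, reward, next_observation, done)
        observation = next_observation
    return agent.total_reward


print(rollout(RandomEnvironment(), EchoAgent()))  # → 10.0
```

The point of the split is that the loop stays identical across agents; only the losses computed inside observe differ between algorithms.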
To install, create a conda environment:
$ conda create -n rllib python=3.7
$ conda activate rllib
$ pip install -e .[test,logging,experiments]
For MuJoCo (license required), run:
$ pip install -e .[mujoco]
On clusters run:
$ sudo apt-get install -y --no-install-recommends --quiet build-essential libopenblas-dev python-opengl xvfb xauth
To run an experiment:
$ python exps/run.py $ENVIRONMENT $AGENT
For help, run:
$ python exps/run.py --help
Install pre-commit with:
$ pip install pre-commit
$ pre-commit install
Run pre-commit with:
$ pre-commit run --all-files
To run CircleCI locally:
$ circleci config process .circleci/config.yml > process.yml
$ circleci local execute -c process.yml --job test
Environment goals are passed to the agent through agent.set_goal(goal). If a goal moves during an episode, include it in the environment's observation space instead. If the goal is to follow a trajectory, it may be best to encode it in the reward model.
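A minimal sketch of the goal-passing convention above. Only agent.set_goal(goal) comes from the text; the GoalAgent class and the distance-based reward are hypothetical stand-ins for illustration.

```python
class GoalAgent:
    """Toy agent that stores a static episode goal set via set_goal()."""

    def __init__(self):
        self.goal = None

    def set_goal(self, goal):
        # A static goal is stored on the agent; a goal that moves during
        # the episode should be part of the observation space instead.
        self.goal = goal

    def reward(self, observation):
        # Illustrative goal-conditioned reward: negative distance to goal.
        return -abs(observation - self.goal)


agent = GoalAgent()
agent.set_goal(2.0)
print(agent.reward(1.5))  # → -0.5
```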
Continuous policies are bounded to [-1, 1] via a tanh transform unless otherwise specified. For environments whose action spaces have different bounds, rescale the action after sampling it.
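The rescaling step can be written as an affine map from [-1, 1] to the environment's own bounds [low, high]. The scale_action helper below is a hypothetical sketch of that step, not part of RL-Lib's API:

```python
import math


def scale_action(action, low, high):
    """Affinely map an action from [-1, 1] to [low, high]."""
    return low + (action + 1.0) * (high - low) / 2.0


raw = 0.5                 # unbounded policy output
bounded = math.tanh(raw)  # tanh squashes it into (-1, 1)
scaled = scale_action(bounded, low=-2.0, high=2.0)
```

With symmetric bounds like [-2, 2] this reduces to multiplying by 2, but the affine form also handles asymmetric bounds such as [0, 1].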
RL-Lib is licensed under MIT License.
If you use RL-Lib in your research, please use the following BibTeX entry:
@Misc{Curi2019RLLib,
  author       = {Sebastian Curi},
  title        = {RL-Lib - A pytorch-based library for Reinforcement Learning research.},
  howpublished = {Github},
  year         = {2020},
  url          = {https://github.com/sebascuri/rllib},
}