README.md (+3 -38)
@@ -14,44 +14,9 @@ Check out the paper at [SMARTS: Scalable Multi-Agent Reinforcement Learning Trai
 
 
 # Documentation
-:rotating_light::bell: Read the docs :notebook_with_decorative_cover: at [smarts.readthedocs.io](https://smarts.readthedocs.io/) . :bell::rotating_light:
-
-# Examples
-### Primitive
-1. [Egoless](examples/e1_egoless.py) example.
-   + Run a SMARTS simulation without any ego agents, but with only background traffic.
-1. [Drive](examples/e10_drive). See [Driving SMARTS 2023.1 & 2023.2](https://smarts.readthedocs.io/en/latest/benchmarks/driving_smarts_2023_1.html) for more info.
-1. [VehicleFollowing](examples/e11_platoon). See [Driving SMARTS 2023.3](https://smarts.readthedocs.io/en/latest/benchmarks/driving_smarts_2023_3.html) for more info.
-1. [PG](examples/e12_rllib/pg_example.py). See [RLlib](https://smarts.readthedocs.io/en/latest/ecosystem/rllib.html) for more info.
-1. [PG Population Based Training](examples/e12_rllib/pg_pbt_example.py). See [RLlib](https://smarts.readthedocs.io/en/latest/ecosystem/rllib.html) for more info.
-
-### RL Environment
-1. [ULTRA](https://github.com/smarts-project/smarts-project.rl/blob/master/ultra) provides a gym-based environment built upon SMARTS to tackle intersection navigation, specifically the unprotected left turn.
+1. Read the docs :notebook_with_decorative_cover: at [smarts.readthedocs.io](https://smarts.readthedocs.io/) :fire:
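For context, the removed "Egoless" entry describes the simplest use of SMARTS: stepping the simulator with background traffic only and no ego agents. Below is a minimal sketch of that idea. The gymnasium entry point `smarts.env:hiway-v1`, the `scenarios`/`agent_interfaces` keyword arguments, and the scenario path are assumptions not confirmed by this diff; `examples/e1_egoless.py` remains the authoritative version.

```python
# Hypothetical egoless run. Entry point and keyword arguments are assumed;
# see examples/e1_egoless.py for the shipped example.
import gymnasium as gym

env = gym.make(
    "smarts.env:hiway-v1",              # assumed registered entry point
    scenarios=["scenarios/sumo/loop"],  # assumed bundled scenario path
    agent_interfaces={},                # no ego agents -> egoless simulation
)

env.reset()
for _ in range(100):
    # With no ego agents there are no actions to supply; an empty action
    # dict simply advances the background (social) traffic by one step.
    observations, rewards, terminateds, truncateds, infos = env.step({})
env.close()
```

With `agent_interfaces` empty there are no per-agent observations or rewards to consume; the loop exists only to let the background traffic evolve, which is useful for sanity-checking a scenario before adding agents.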
docs/ecosystem/rllib.rst (+11 -7)
@@ -4,16 +4,20 @@
 RLlib
 =====
 
-**RLlib** is an open-source library for reinforcement learning that offers both high scalability and a unified API for a variety
-of applications. ``RLlib`` natively supports ``TensorFlow``, ``TensorFlow Eager``, and ``PyTorch``. Most of its internals are agnostic to such
-deep learning frameworks.
+**RLlib** is an open-source library for reinforcement learning that offers both high scalability and a unified API for a variety of applications. ``RLlib`` natively supports ``TensorFlow``, ``TensorFlow Eager``, and ``PyTorch``. Most of its internals are agnostic to such deep learning frameworks.
 
 SMARTS contains two examples using `Policy Gradients (PG) <https://docs.ray.io/en/latest/rllib-algorithms.html#policy-gradients-pg>`_.
 
-1. ``e12_rllib/pg_example.py``
-   This example shows the basics of using RLlib with SMARTS through :class:`~smarts.env.rllib_hiway_env.RLlibHiWayEnv`.
-1. ``e12_rllib/pg_pbt_example.py``
-   This example combines Policy Gradients with `Population Based Training (PBT) <https://docs.ray.io/en/latest/tune/api/doc/ray.tune.schedulers.PopulationBasedTraining.html>`_ scheduling.
+#. Policy gradient
+
+   + script: :examples:`e12_rllib/pg_example.py`
+   + Shows the basics of using RLlib with SMARTS through :class:`~smarts.env.rllib_hiway_env.RLlibHiWayEnv`.
+
+#. Policy gradient with population based training
+
+   + script: :examples:`e12_rllib/pg_pbt_example.py`
+   + Combines Policy Gradients with `Population Based Training (PBT) <https://docs.ray.io/en/latest/tune/api/doc/ray.tune.schedulers.PopulationBasedTraining.html>`_ scheduling.
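To make the two RLlib examples concrete, here is a hedged sketch of PG training under a PBT scheduler, in the spirit of `e12_rllib/pg_pbt_example.py`. `RLlibHiWayEnv` is the class named in the docs above, but the `env_config` keys (`scenarios`, `agent_specs`), the `"PG"` trainable name, and the hyperparameter choices are assumptions that depend on the installed SMARTS and Ray versions; this is an outline, not the shipped example.

```python
# Hypothetical PG + PBT training sketch. env_config keys and the "PG"
# trainable name are assumed; consult e12_rllib/pg_pbt_example.py for
# the version matching your SMARTS/Ray install.
import ray
from ray import tune
from ray.tune.schedulers import PopulationBasedTraining

from smarts.env.rllib_hiway_env import RLlibHiWayEnv

pbt = PopulationBasedTraining(
    time_attr="time_total_s",
    metric="episode_reward_mean",
    mode="max",
    perturbation_interval=300,  # seconds between exploit/explore steps
    hyperparam_mutations={
        "lr": [1e-3, 5e-4, 1e-4],
        "train_batch_size": [1000, 2000, 4000],
    },
)

ray.init()
tune.run(
    "PG",  # older Ray releases register Policy Gradients under this name
    scheduler=pbt,
    num_samples=4,  # population size: four trials trained in parallel
    stop={"training_iteration": 50},
    config={
        "env": RLlibHiWayEnv,
        "env_config": {
            "scenarios": ["scenarios/sumo/loop"],  # assumed scenario path
            "agent_specs": {},  # assumed key; fill with per-agent AgentSpec
        },
        "lr": 1e-4,
        "train_batch_size": 2000,
    },
)
```

PBT treats the trials as a population: at each perturbation interval, poorly performing trials copy the weights of better ones and mutate `lr` or `train_batch_size`, which is why those keys appear both in `config` and in `hyperparam_mutations`.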