
Commit cf52163

Simplify readme (#2131)
1 parent 751ae1a commit cf52163

2 files changed: +14 -45 lines


README.md (+3 -38)
@@ -14,44 +14,9 @@ Check out the paper at [SMARTS: Scalable Multi-Agent Reinforcement Learning Trai
 ![](docs/_static/smarts_envision.gif)
 
 # Documentation
-:rotating_light: :bell: Read the docs :notebook_with_decorative_cover: at [smarts.readthedocs.io](https://smarts.readthedocs.io/) . :bell: :rotating_light:
-
-# Examples
-### Primitive
-1. [Egoless](examples/e1_egoless.py) example.
-   + Run a SMARTS simulation without any ego agents, but with only background traffic.
-1. [Single-Agent](examples/e2_single_agent.py) example.
-   + Run a SMARTS simulation with a single ego agent.
-1. [Multi-Agent](examples/e3_multi_agent.py) example.
-   + Run a SMARTS simulation with multiple ego agents.
-1. [Environment Config](examples/e4_environment_config.py) example.
-   + Demonstrate the main observation/action configuration of the environment.
-1. [Agent Zoo](examples/e5_agent_zoo.py) example.
-   + Demonstrate how the agent zoo works.
-1. [Agent interface example](examples/6_agent_interface.py)
-   + TODO demonstrate how the agent interface works.
-
-### Integration examples
-A few more complex integrations are demonstrated.
-
-1. Configurable example
-   + script: [examples/e7_experiment_base.py](examples/e7_experiment_base.py)
-   + Configurable agent number.
-   + Configurable agent type.
-   + Configurable environment.
-1. Parallel environments
-   + script: [examples/e8_parallel_environment.py](examples/e8_parallel_environment.py)
-   + Multiple SMARTS environments in parallel
-   + ActionSpaceType: LaneWithContinuousSpeed
-
-### RL Examples
-1. [Drive](examples/e10_drive). See [Driving SMARTS 2023.1 & 2023.2](https://smarts.readthedocs.io/en/latest/benchmarks/driving_smarts_2023_1.html) for more info.
-1. [VehicleFollowing](examples/e11_platoon). See [Driving SMARTS 2023.3](https://smarts.readthedocs.io/en/latest/benchmarks/driving_smarts_2023_3.html) for more info.
-1. [PG](examples/e12_rllib/pg_example.py). See [RLlib](https://smarts.readthedocs.io/en/latest/ecosystem/rllib.html) for more info.
-1. [PG Population Based Training](examples/e12_rllib/pg_pbt_example.py). See [RLlib](https://smarts.readthedocs.io/en/latest/ecosystem/rllib.html) for more info.
-
-### RL Environment
-1. [ULTRA](https://github.com/smarts-project/smarts-project.rl/blob/master/ultra) provides a gym-based environment built upon SMARTS to tackle intersection navigation, specifically the unprotected left turn.
+1. Read the docs :notebook_with_decorative_cover: at [smarts.readthedocs.io](https://smarts.readthedocs.io/) :fire:
+1. [Base examples](https://smarts.readthedocs.io/en/latest/examples/base_examples.html)
+1. [RL models](https://smarts.readthedocs.io/en/latest/examples/rl_model.html)
 
 # Issues, Bugs, Feature Requests
 1. First, read how to communicate issues, report bugs, and request features [here](./docs/resources/contributing.rst#communication).
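
For orientation, the removed "Primitive" list above described scripts such as the single-agent example (run a SMARTS simulation with a single ego agent). The sketch below is a rough, non-authoritative illustration of that kind of script: the `smarts.env:hiway-v1` entry point, the `agent_interfaces` keyword, the scenario path, and the agent id are assumptions and may not match the shipped `examples/e2_single_agent.py`.

```python
# Hypothetical single-ego-agent run, in the spirit of examples/e2_single_agent.py.
# The env id, keyword names, and scenario path are assumptions, not the repo's code.
import gymnasium as gym

from smarts.core.agent import Agent
from smarts.core.agent_interface import AgentInterface, AgentType


class KeepLaneAgent(Agent):
    """Trivial policy: always ask SMARTS to keep the current lane."""

    def act(self, obs):
        return "keep_lane"


AGENT_ID = "Agent-007"  # hypothetical agent name

env = gym.make(
    "smarts.env:hiway-v1",  # assumed gymnasium entry point
    scenarios=["scenarios/sumo/loop"],  # assumed bundled scenario
    agent_interfaces={
        AGENT_ID: AgentInterface.from_type(AgentType.Laner, max_episode_steps=150)
    },
)

agent = KeepLaneAgent()
observations, _ = env.reset()
# hiway-v1 is assumed to return per-agent dicts with an "__all__" key
terminateds = {"__all__": False}
truncateds = {"__all__": False}
while not (terminateds["__all__"] or truncateds["__all__"]):
    action = agent.act(observations[AGENT_ID])
    observations, rewards, terminateds, truncateds, infos = env.step({AGENT_ID: action})
env.close()
```

The `"keep_lane"` action matches the lane-following action space that `AgentType.Laner` configures; a learned policy would replace `KeepLaneAgent.act`.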

docs/ecosystem/rllib.rst (+11 -7)
@@ -4,16 +4,20 @@
 RLlib
 =====
 
-**RLlib** is an open-source library for reinforcement learning that offers both high scalability and a unified API for a variety
-of applications. ``RLlib`` natively supports ``TensorFlow``, ``TensorFlow Eager``, and ``PyTorch``. Most of its internals are agnostic to such
-deep learning frameworks.
+**RLlib** is an open-source library for reinforcement learning that offers both high scalability and a unified API for a variety of applications. ``RLlib`` natively supports ``TensorFlow``, ``TensorFlow Eager``, and ``PyTorch``. Most of its internals are agnostic to such deep learning frameworks.
 
 SMARTS contains two examples using `Policy Gradients (PG) <https://docs.ray.io/en/latest/rllib-algorithms.html#policy-gradients-pg>`_.
 
-1. ``e12_rllib/pg_example.py``
-   This example shows the basics of using RLlib with SMARTS through :class:`~smarts.env.rllib_hiway_env.RLlibHiWayEnv`.
-1. ``e12_rllib/pg_pbt_example.py``
-   This example combines Policy Gradients with `Population Based Training (PBT) <https://docs.ray.io/en/latest/tune/api/doc/ray.tune.schedulers.PopulationBasedTraining.html>`_ scheduling.
+#. Policy gradient
+
+   + script: :examples:`e12_rllib/pg_example.py`
+   + Shows the basics of using RLlib with SMARTS through :class:`~smarts.env.rllib_hiway_env.RLlibHiWayEnv`.
+
+#. Policy gradient with population based training
+
+   + script: :examples:`e12_rllib/pg_pbt_example.py`
+   + Combines Policy Gradients with `Population Based Training (PBT) <https://docs.ray.io/en/latest/tune/api/doc/ray.tune.schedulers.PopulationBasedTraining.html>`_ scheduling.
+
 
 Recommended reads
 -----------------
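
Since the reworked rst above only names the two scripts, here is a hedged sketch of how an RLlib algorithm can be pointed at ``RLlibHiWayEnv``. It assumes Ray ~2.x (where ``PGConfig`` is still available) and guesses the ``env_config`` keys, the ``AgentSpec`` import path, and the scenario path; ``e12_rllib/pg_example.py`` remains the source of truth.

```python
# Hedged sketch: pointing RLlib's PG algorithm at SMARTS's RLlibHiWayEnv.
# Assumes Ray ~2.x (PGConfig was removed in later releases); the env_config
# keys are guesses, so treat e12_rllib/pg_example.py as the source of truth.
import ray
from ray.rllib.algorithms.pg import PGConfig

from smarts.core.agent_interface import AgentInterface, AgentType
from smarts.env.rllib_hiway_env import RLlibHiWayEnv
from smarts.zoo.agent_spec import AgentSpec  # assumed import path for AgentSpec

agent_spec = AgentSpec(
    interface=AgentInterface.from_type(AgentType.Laner, max_episode_steps=150),
)

config = (
    PGConfig()
    .environment(
        env=RLlibHiWayEnv,
        env_config={
            "scenarios": ["scenarios/sumo/loop"],      # assumed scenario path
            "agent_specs": {"Agent-007": agent_spec},  # assumed key name
            "headless": True,
            "seed": 42,
        },
    )
    .rollouts(num_rollout_workers=1)
    .framework("torch")
)

if __name__ == "__main__":
    ray.init()
    algo = config.build()
    for _ in range(3):  # a few iterations as a smoke test, not a full training run
        result = algo.train()
        print(result.get("episode_reward_mean"))
    algo.stop()
    ray.shutdown()
```

For the PBT variant, the same config would be handed to ``ray.tune`` with a ``PopulationBasedTraining`` scheduler instead of being built and trained directly.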
